Glacier calving fronts are highly dynamic environments that are becoming ubiquitous as glaciers recede and, in many cases, develop proglacial lakes. Monitoring of calving fronts is necessary to fully quantify the glacier ablation budget and to warn nearby communities of the threat of hazards, such as glacial lake outburst floods (GLOFs), tsunami waves, and iceberg collapses. Time-lapse camera arrays, with structure-from-motion photogrammetry, can produce regular 3D models of glaciers to monitor changes in the ice but are seldom incorporated into monitoring systems owing to the high cost of equipment. In this proof-of-concept study at Fjallsjökull, Iceland, we present and test a low-cost, highly adaptable camera system based on Raspberry Pi computers and compare the resulting point cloud data to a reference cloud generated using an unoccupied aerial vehicle (UAV). The mean absolute difference between the Raspberry Pi and UAV point clouds is found to be 0.301 m with a standard deviation of 0.738 m. We find that high-resolution point clouds can be robustly generated from cameras positioned up to 1.5 km from the glacier (mean absolute difference 0.341 m, standard deviation 0.742 m). Combined, these experiments suggest that, for monitoring glacier calving events, Raspberry Pi cameras are an affordable, flexible, and practical option for future scientific research. The connectivity capabilities of Raspberry Pi computers also open the possibility of real-time structure-from-motion reconstructions of glacier calving fronts, deployable as an early warning system for calving-triggered GLOFs.
Monitoring glacier calving fronts is becoming increasingly important as climate warming changes the stability of the cryosphere. Globally, glacier frontal positions have receded rapidly in recent decades (Marzeion et al., 2014; Zemp et al., 2015), leading to an increased threat of glacial lake outburst floods (GLOFs) from newly formed proglacial lakes at the glacier terminus (Tweed and Carrivick, 2015), or tsunami waves and iceberg collapse at marine-terminating glaciers (Minowa et al., 2018). Large ice calving events and their impact into glacial lakes can trigger violent waves (Lüthi and Vieli, 2016) and ultimately GLOF events if the wave goes on to overtop the impounding dam, though both the magnitude and frequency of this phenomenon are poorly quantified owing to a lack of appropriate monitoring (Emmer et al., 2015; Veh et al., 2019). Satellites are able to provide near-continuous observations of lake growth (Jawak et al., 2015), hazard development (Quincey et al., 2005; Rounce et al., 2017), and, over large glaciers, calving rate (Luckman et al., 2015; Sulak et al., 2017; Shiggins et al., 2023). However, to measure frontal dynamics at a high spatial and temporal resolution, which is particularly necessary over calving glaciers, monitoring requirements can only be met by in situ sensors.
Accurate 3D models of glaciers and their calving fronts are necessary to fully evaluate the hazards they pose (Kääb, 2000; Fugazza et al., 2018) and to better understand frontal dynamics (Ryan et al., 2015). Where in situ camera sensors have been used to monitor glacier fronts as part of an early warning system, stationary cameras have previously been used to relay regular images to be analysed externally (Fallourd et al., 2010; Rosenau et al., 2013; Giordan et al., 2016; How et al., 2020). This can be useful for monitoring glacier velocity, snowfall, and calving dynamics (Holmes et al., 2021), but remains a 2D snapshot of glacier behaviour which only allows qualitative insights into calving volume (Bunce et al., 2021). 3D models, on the other hand, permit more detailed analysis and allow calving events to be quantified in size (James et al., 2014; Mallalieu et al., 2020). Unoccupied aerial vehicles (UAVs) have been used regularly to capture high-resolution 3D models of glacier fronts (Ryan et al., 2015; Bhardwaj et al., 2016; Chudley et al., 2019), but, as yet, these systems are not autonomous and are therefore dependent on an operator being present, as well as often being highly expensive (many thousands of dollars), including the staff-based cost of revisiting these sites.
Arrays of fixed cameras can be positioned around a glacier front to capture images repeatedly over long time periods. The resulting imagery can then be used to photogrammetrically generate 3D models at a high temporal resolution and analyse change over days, months, or years. Off-the-shelf time-lapse cameras provide some of the cheapest ways of reliably collecting imagery for repeat photogrammetry and have been deployed at Russell Glacier, Greenland, to monitor seasonal calving dynamics (Mallalieu et al., 2017). Elsewhere in glaciology, time-lapse arrays using more expensive DSLR-grade cameras have been used for repeat structure-from-motion (SfM) to quantify ice cliff melt on Langtang Glacier at high spatial resolution (Kneib et al., 2022). In other disciplines, time-lapse arrays for SfM have been used to monitor the soil surface during storms (Eltner et al., 2017), the stability of rock slopes (Kromer et al., 2019), and the evolution of thaw slumps (Armstrong et al., 2018), for example. The key limitation of these studies, and this setup design, is that a site revisit is necessary to collect data, and analysis is therefore far from real-time. Autonomous photogrammetry, whereby 3D models are created with no user input, is still in its infancy but shows great promise, with machine learning used to optimize camera positions (Eastwood et al., 2020), point cloud stacking to enhance time-lapse photogrammetry (Blanch et al., 2020), and user-friendly tool sets for monoscopic photogrammetry (e.g. PyTrx (How et al., 2020), ImGRAFT (Messerli and Grinsted, 2015) and EMT (Schwalbe and Maas, 2017)). Real-time data transmission is the next step in autonomous time-lapse photogrammetry, but trail cameras with cellular connectivity are many hundreds of dollars per unit, rendering this setup unaffordable for most monitoring schemes.
Raspberry Pi computers are small and low cost, and were originally designed for teaching and learning programming in schools. Their ease of use and affordability mean they have also been used extensively as field sensors in the geosciences (Ferdoush and Li, 2014), as the quality of their camera sensors has developed to a science-grade level (Pagnutti et al., 2017). In hazard management, Raspberry Pi cameras have been used as standalone monitoring systems to complement wider internet-of-things (IoT) networks (Aggarwal et al., 2018) and attached to UAVs to produce orthophotographs (Piras et al., 2017). In glacierized environments, the durability, low cost, and low power requirements of Raspberry Pis mean they have been used to complement sensor networks, such as controlling the capture of DSLR-grade time-lapse cameras (Carvallo et al., 2017; Giordan et al., 2020) or as a ground station for UAV-based research (Chakraborty et al., 2019). However, to our knowledge, Raspberry Pis and low-cost camera modules have never been the focus of a glaciology investigation and their potential for SfM in the wider geosciences has yet to be fully realized. In addition, the flexibility provided by a fully programmable sensor could offer geoscientists the ability to tailor data acquisition and perform low-level in-field processing.
The aim of this study was, therefore, to evaluate the quality of Raspberry Pi imagery for photogrammetric processing, with a view to incorporating low-cost, high-functionality sensors in glacier monitoring systems. Given that the highest-accuracy photogrammetric 3D models of glacier fronts are derived from UAV imagery (typical horizontal uncertainty of 0.12 m (0.14 m vertical) even in the absence of ground control points (Chudley et al., 2019)), we chose to use a UAV-based point cloud as our primary reference dataset. We intensively deployed both sensor systems (ground-based Pis and aerial UAV) at Fjallsjökull, Iceland, over a four-day period. As a secondary objective, we also sought to understand the limitations of the Raspberry Pi by deploying sensors at a range of distances from the glacier front and removing images during point cloud processing to identify the fewest frames necessary for generating accurate 3D models.
Fjallsjökull is an outlet glacier of Öræfajökull, an
ice-covered volcano to the south of the wider Vatnajökull ice cap, in
south-east Iceland (Fig. 1). Recession and thinning of Fjallsjökull
has been underway since the end of the Little Ice Age, but has substantially
accelerated in recent decades owing to climate warming (Howarth
and Price, 1969; Chandler et al., 2020). Fjallsjökull terminates in a
large (
Fjallsjökull (flowing left-to-right), terminating in Fjallsárlón, captured by Planet Imagery on 10 September 2021. A–H denote the eight point cloud sub-sections generated by both the Raspberry Pi and UAV. X–Y denote the start and end of land-based data collection at approximately 25 m intervals along the shoreline, used to generate sub-section B from a distance.
We tested the Raspberry Pi high quality camera module with a 16 mm telephoto lens in comparison to images taken from a DJI Mavic 2 Pro UAV. We also tested the Raspberry Pi camera module V2 (of lower resolution, but a cheaper option), due to its science-grade radiometric calibration (Pagnutti et al., 2017), but initial tests indicated the quality of the long-range imagery was too low to proceed with generating 3D data. The Raspberry Pi camera was attached to a Raspberry Pi 4B computer with an LCD display to visualize images and adjust focus as they were captured. Technical comparisons of the setups are given in Table 1, and a list of components and our code for acquisition are given as Supplementary Information.
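Our full acquisition code is provided as Supplementary Information; as a minimal illustration of how stills can be captured on a schedule with the picamera2 library, the sketch below builds timestamped filenames and loops over captures. The interval, base directory, and filename pattern are illustrative, not the values used in this study.

```python
from datetime import datetime
from pathlib import Path

def image_path(base_dir: str, timestamp: datetime) -> Path:
    """Build a sortable, timestamped filename for each capture."""
    return Path(base_dir) / f"fjallsjokull_{timestamp:%Y%m%d_%H%M%S}.jpg"

def capture_timelapse(base_dir: str, n_frames: int, interval_s: float) -> None:
    """Capture a fixed number of full-resolution stills at a regular interval."""
    import time
    from picamera2 import Picamera2  # hardware-dependent import

    cam = Picamera2()
    cam.configure(cam.create_still_configuration())  # full-sensor stills
    cam.start()
    for _ in range(n_frames):
        cam.capture_file(str(image_path(base_dir, datetime.now())))
        time.sleep(interval_s)
    cam.stop()
```

The timestamped naming keeps images sortable by acquisition time, which simplifies later association of frames with boat or array positions.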
Comparison of technical specifications between Raspberry Pi and UAV sensors. Two typical time-lapse packages are provided as a comparison, following the setup from Mallalieu et al. (2017), using the MMS model of their wildlife camera to compare like-for-like connectivity with the Raspberry Pi, and Kienholz et al. (2019). The Raspberry Pi high quality camera module is fitted with a 16 mm telephoto lens. N/A stands for not applicable.
The Raspberry Pi was mounted in a fixed position on a boat which traversed
the southernmost
An overview of our data acquisition.
In order to test the limits of the Raspberry Pi, we performed additional analysis on sub-section B (Fig. 1). We collected images of the calving face from a portion of the shoreline of Fjallsárlón, shown as X to Y in Fig. 1, which ranged from 1.2 to 1.5 km from the calving face. Owing to bad weather, we only collected shoreline data for a limited section (covering sub-section B entirely) before the glacier was obscured from view by fog. This experiment allowed us to assess how the Raspberry Pi performed at long range.
We also conducted an additional experiment on sub-section B to determine the performance of the camera under sub-optimal conditions by removing 21 of the 31 images captured by the boat transect and deriving point clouds from the remaining 10 camera positions. This reflects the reality of the trade-off between data quality and practical considerations. In theory, fewer images should result in a lower point density (Micheletti et al., 2015), but a Raspberry Pi time-lapse camera array would be cheaper if fewer cameras were required.
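One plausible way to select such a reduced subset (the exact selection scheme is an assumption here, not stated in the text) is to keep approximately evenly spaced positions along the transect, preserving coverage at both ends:

```python
import numpy as np

def evenly_subsample(items, k):
    """Select k approximately evenly spaced elements, always keeping the
    first and last (e.g. 10 of 31 boat camera positions)."""
    idx = np.linspace(0, len(items) - 1, k).round().astype(int)
    return [items[i] for i in idx]
```

Even spacing maximizes baseline diversity for the remaining cameras, which matters more for SfM geometry than the raw image count.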
For images from both the Raspberry Pi and UAV, far cliffs (rock faces
flanking Fjallsjökull; Fig. 2a) were masked out prior to generating
tie points in Agisoft Metashape. Images from the UAV were georeferenced
using its onboard GNSS real-time kinematic positioning (RTK) system, with an
accuracy
Differences between the Raspberry Pi and UAV point clouds were measured using the multiscale model-to-model cloud comparison (M3C2) tool in CloudCompare (Lague et al., 2013). M3C2 selects a set of core points from the Raspberry Pi cloud
and quantifies the distance to the UAV cloud about those points using
projection cylinders. This requires users to define key parameters,
including the width of normal (
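The core of the M3C2 calculation can be illustrated in a few lines of NumPy. This is a simplified, single-scale sketch of the projection-cylinder averaging at one core point, not CloudCompare's full implementation (which also estimates multi-scale normals and distance uncertainty); the radius and depth defaults are placeholders for the parameters discussed above.

```python
import numpy as np

def m3c2_distance(core, normal, cloud_a, cloud_b, radius=1.0, depth=5.0):
    """Signed distance between two clouds at one core point, measured along
    `normal` inside a projection cylinder of the given radius and half-depth
    (the essence of M3C2; Lague et al., 2013)."""
    n = normal / np.linalg.norm(normal)

    def mean_offset(cloud):
        rel = cloud - core
        along = rel @ n                                    # signed offset along normal
        radial = np.linalg.norm(rel - np.outer(along, n), axis=1)
        inside = (radial <= radius) & (np.abs(along) <= depth)
        return along[inside].mean() if inside.any() else np.nan

    # Positive distance means cloud_b sits further along the normal than cloud_a
    return mean_offset(cloud_b) - mean_offset(cloud_a)
```

Averaging the points of each cloud inside the cylinder, rather than taking nearest-neighbour distances, is what makes M3C2 robust to surface roughness and differing point densities.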
The Raspberry Pi-based camera captured high-resolution imagery across the
full length of Fjallsjökull, at distances of up to 1.5 km. Glacier
textures and structures, such as debris patches and cracks in the ice, were
clearly visible within the photos captured by the Raspberry Pi (see example
imagery in Fig. 3) to aid 3D reconstruction. The ground sampling distance
(GSD) (the on-ground distance represented by one pixel) of the Raspberry Pi
at 500 m range was 3.80 cm and at 1.5 km was 11.41 cm (following
calculations by O'Connor et al., 2017). By comparison,
trail cameras used by Mallalieu et al. (2017),
at a mean distance of 785 m to Russell Glacier, achieved GSD of 28.05 cm. We
successfully generated point clouds along the front face of Fjallsjökull
using the 315 Raspberry Pi photos captured from the boat survey. Eight point
clouds were generated at high resolution, with survey lengths of
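The GSD figures above follow the calculations of O'Connor et al. (2017); the simplest pinhole approximation, GSD = distance × pixel pitch / focal length, can be sketched as below. The 1.55 µm pixel pitch and 16 mm focal length are nominal HQ camera values assumed here, so the result differs slightly from the reported figures.

```python
def ground_sampling_distance(distance_m, pixel_pitch_m, focal_length_m):
    """On-ground footprint of one pixel for an ideal pinhole camera:
    GSD = distance * pixel pitch / focal length."""
    return distance_m * pixel_pitch_m / focal_length_m

# Nominal HQ camera parameters (assumed): 1.55 um pixels, 16 mm lens
gsd_500m = ground_sampling_distance(500.0, 1.55e-6, 0.016)   # metres per pixel
```

Because GSD scales linearly with range, tripling the distance to the glacier triples the on-ground pixel size, which is why long-range deployments need longer focal lengths to stay competitive.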
Example images captured by the Raspberry Pi sensor. Images A, B,
and C are taken from the boat transect (
Point clouds generated by the Raspberry Pi compare closely with those derived from the UAV, with a mean absolute M3C2 distance of 0.301 m
and a standard deviation of 0.738 m across the Fjallsjökull calving face
(Table 2, Fig. 4). Point density of all Raspberry Pi point clouds was high
(
Key statistics and M3C2 comparison between point clouds generated by the Raspberry Pi and UAV. Frontal sub-sections can be seen in Fig. 1.
Fjallsjökull calving face running from northernmost (A) to southernmost (H) sections, as captured by the Raspberry Pi and UAV, and the M3C2 distance between each. Note that scales vary between sections to minimize white space in the figure design.
Histogram of M3C2 distance values across the Fjallsjökull
calving face, combining all eight sub-sections together. There is a slight
positive skew in distribution (mean 4.31 cm). M3C2 distances are cropped
here to
We analysed sub-section B (
Sub-section B was generated using 31 images from the Raspberry Pi in Fig. 4 and Table 2, but time-lapse camera arrays are generally limited to 10–15 cameras due to cost. We found that using a reduced set of 10 images had
little impact on mean absolute error (0.263 m compared to 0.272 m using all
images, a 3 % decrease), but increased the standard deviation (0.627 m
compared to 0.461 m when using all images, a 36 % increase). This was most
notable towards the periphery of the point cloud (Fig. 6G), though the
point cloud contains more gaps than the original. Sub-section B is
approximately 250 m long and an individual image captures
Raspberry Pi cameras have rarely been tested in a glaciological setting, but
our analysis suggests that they could feasibly be deployed for long-term
monitoring purposes and, given their comparable quality to a UAV-derived
point cloud, have the potential to capture and quantify dynamic events (e.g. calving). Our data show that, from up to 1.5 km away, Raspberry Pi cameras
can detect small features within the ice and, when used to generate 3D
data, could identify, with confidence, any displacement of ice over
Improvements to research design, such as positioning cameras at a more optimal range of heights and angles, including above the glacier, are likely to reduce error in the Raspberry Pi point clouds (James and Robson, 2012; Bemis et al., 2014; Medrzycka et al., 2016; Holmes et al., 2021). A key limitation of our research was that images were captured only from a fixed height in the boat. Indeed, it is no coincidence that we observed the lowest errors between the two sensors at approximately the height level of the boat across all point clouds generated. We also speculate that systematic patterns of error, where high positive error neighbours high negative error such as in Fig. 6e, are due to varying angles of the glacier front being captured in the UAV model but not in the Raspberry Pi which only acquired front-facing images. Therefore, using a greater variety of camera angles and positions, for example by positioning cameras above the glacier front using nearby bedrock or moraines, would likely reduce error across the model (Mosbrucker et al., 2017; Medrzycka et al., 2016; Holmes et al., 2021). While our setup and analysis therefore may represent a conservative view of the potential use of Raspberry Pis in photogrammetry, it also reflects the practical considerations of working in field environments, which are frequently sub-optimal for deploying fixed cameras.
Exploring the limits of the Raspberry Pi sensor in comparison to
UAV.
Our study used relative georeferencing methods, removing the need for absolute positioning of the clouds using surveyed ground control points. Over glacier calving margins, placing ground control points is especially challenging and alternative methods are required (Mallalieu et al., 2017). For example, there is precedent for using the geospatial data from one point cloud to reference another when comparing sensors (Zhang et al., 2019; Luetzenburg et al., 2021). Alternatively, the positions of the cameras can be used to determine the georeferencing. This “direct georeferencing” can be achieved using GNSS-based aerial triangulation of fixed positions, or an on-board GPS unit that shares the clock of the camera such that a precise time-stamp of location can be associated with each of the acquired images (Chudley et al., 2019). Using this approach would allow comparison between repeat point clouds captured by the Raspberry Pi without any alignment to a UAV-based point cloud. For broader photogrammetry applications of the Raspberry Pi, particularly involving setups with only one camera, control points may be essential for accurately constraining the camera position (Schwalbe and Maas, 2017).
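Where matched points (e.g. camera positions or manually identified features) are available in both clouds, the relative alignment described above reduces to estimating a least-squares rigid transform. The sketch below uses the standard Kabsch SVD solution; it illustrates the principle and is not the alignment workflow used in this study.

```python
import numpy as np

def rigid_align(source, target):
    """Least-squares rotation R and translation t mapping matched `source`
    points onto `target` points, via the Kabsch SVD solution."""
    mu_s, mu_t = source.mean(axis=0), target.mean(axis=0)
    H = (source - mu_s).T @ (target - mu_t)        # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))         # guard against a reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = mu_t - R @ mu_s
    return R, t
```

Once R and t are estimated from a handful of correspondences, the full Raspberry Pi cloud can be mapped into the reference frame with `cloud @ R.T + t`.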
In this study, we cropped our point clouds to show only the front, flat,
calving face of Fjallsjökull. This involved significant trimming of
point clouds generated by the UAV (up to 40 % of points removed), while
the Raspberry Pi only required minor adjustments (
For studies making use of a typical DSLR-grade handheld camera, James and Robson (2012) and Smith et al. (2016) suggest a typical relative precision ratio of
Glacier dynamics at a calving margin are complex, but a low-cost time-lapse
camera array can offer insight into many key questions. Ice velocities at
the terminus of Fjallsjökull range from
In addition to calving events, terrestrial-based photogrammetry based on a Raspberry Pi system could monitor other important glacier dynamics at a low cost. There is a long history of using terrestrial photogrammetry for monitoring glacier thinning to quantify mass balance change of mountain glaciers, though this typically involves repeat site visits (Brecher and Thompson, 1993; Piermattei et al., 2015). Where surrounding topography allows, positioning Raspberry Pi cameras to look down on to the glacier surface would allow for SfM-based velocity calculation (Lewińska et al., 2021). Creep rates of rock glaciers have been successfully monitored through terrestrial photogrammetry (Kaufmann, 2012) and UAV surveys (Vivero and Lambiel, 2019), but these again require repeated site visits. In each of these additional applications, low-cost Raspberry Pi cameras could produce accurate 3D models at a greater temporal frequency, without the logistical challenges and financial costs associated with repeat fieldwork.
Our boat-based study provides confidence that terrestrial-based, high-cadence setups could produce regular, accurate 3D models. While not reported in these results, this author team has also successfully operated a separate Raspberry Pi camera in the Peruvian Andes, acquiring three images per day for 3 months using a timer switch and solar panel (Taylor, 2022). Given the customizability of Raspberry Pi cameras, their built-in connectivity, accuracy in acquiring 3D models, and robustness in cold environments, we are confident that arrays of fixed Raspberry Pi cameras could produce the first near real-time photogrammetry setup for continuous 3D monitoring of glacier calving fronts. Outside of photogrammetry, the programmability of Raspberry Pis as terrestrial cameras could offer advances in a broad range of settings, including GLOF management (Mulsow et al., 2015), supraglacial lake drainage (Danielson and Sharp, 2013), and iceberg tracking (Kienholz et al., 2019).
We speculate that, given likely sensor innovation and the decreasing cost of technology, the potential of low-cost sensors in glaciology research will only increase (Taylor et al., 2021). We envisage Raspberry Pi computers, or other microprocessors, playing a key role in this expansion. Almost all Raspberry Pi models have built-in WiFi, which allows data sharing between individual devices. With a WiFi radio on-site, providing a range of many hundreds of metres, individual cameras could autonomously send their data towards a central, more powerful Raspberry Pi unit for further analysis. Similar wireless sensor networks in glaciology have been produced to monitor seismicity (Anandakrishnan et al., 2022), ice surface temperatures (Singh et al., 2018), and subglacial hydrology (Prior-Jones et al., 2021). With the development of autonomous photogrammetry pipelines (Eastwood et al., 2019), a Raspberry Pi-based camera array system could, theoretically, run entirely independently of user input. Furthermore, the flexibility of Raspberry Pi computers, particularly their ability to operate multiple sensor types from one unit, opens up the possibility for wide sensor networks across glaciers – creating comprehensive digital monitoring of rapidly changing environments (Hart and Martinez, 2006; Taylor et al., 2021).
There exists considerable potential for low-cost sensors in mountain glacier communities, which are predominantly located in developing countries. Early warning systems situated around glacial lakes in the Himalaya have successfully prevented disaster during a number of GLOF events by allowing time for downstream communities to evacuate (Wang et al., 2022). By reducing the cost of camera-based sensors that are frequently used as part of a monitoring system (for example at Kyagar glacier in the Chinese Karakorum; Haemmig et al., 2014), more cameras can be situated to monitor calving rates, velocity, or stability at higher precision and accuracy in 3D. A low cost also means that more community-driven initiatives based on this Raspberry Pi system are viable. Such systems must be co-designed with, and ultimately owned by, the communities they serve. Simple systems (such as Raspberry Pis), with easily replaceable components and open-access documentation, lower the technical knowledge required to maintain an early warning system, so a greater diversity of stakeholders can engage with its maintenance. Previous work has shown that diversity in engagement, and genuine understanding of the social structures on which communities are built, is essential for the success of early warning systems like these (Huggel et al., 2020).
While we suggest that Raspberry Pi cameras offer an alternative to
expensive DSLR cameras for time-lapse camera arrays, based on our
experiences we note a series of recommendations for future researchers and
communities looking to use this approach in their own systems:
- Camera setup must be carefully considered and adopt best practice set by others (e.g. Mallalieu et al., 2017) with regard to angle, overlap, and positioning. Positioning cameras further away from the target (
- There is only a narrow window of focus when using the Raspberry Pi 16 mm telephoto lens, particularly over 1 km from the target, and an in-field screen is essential to ensure correct setup.
- While SfM-generated models can be produced without the use of ground control points, as presented here, it is advisable to collect these to produce accurate photogrammetric measurements from a Raspberry Pi and to allow for comparison between point clouds.
- In the absence of an in-field screen, Secure Shell Protocol (SSH)-based access to the Raspberry Pi allows image acquisitions to be viewed on a computer screen or smartphone, though leaving wireless connectivity enabled draws more power.
- Raspberry Pi computers draw very little power when commanded to turn on/off between image acquisitions and can be sustained for many months using a lead-acid battery and small solar panel.
- While Raspberry Pi cameras are robust and usable in sub-zero temperatures, adequate weatherproofing must be used to ensure that the camera lens does not fog over time.
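As an illustration of the power-saving scheduling described above, a capture controller might compute the next daylight-restricted acquisition time as follows. The one-hour interval and 07:00–19:00 window are assumptions for illustration, not our field configuration, and in practice a hardware timer switch can cut power entirely between captures.

```python
from datetime import datetime, timedelta

def next_capture_time(now: datetime, interval: timedelta,
                      start_hour: int = 7, end_hour: int = 19) -> datetime:
    """Next acquisition time, skipping the hours of darkness
    (illustrative daylight window of 07:00-19:00)."""
    nxt = now + interval
    if nxt.hour < start_hour:
        # Too early: wait for the morning window to open
        nxt = nxt.replace(hour=start_hour, minute=0, second=0, microsecond=0)
    elif nxt.hour >= end_hour:
        # Too late: resume at the start of the next day's window
        nxt = (nxt + timedelta(days=1)).replace(hour=start_hour, minute=0,
                                                second=0, microsecond=0)
    return nxt
```

Sleeping (or powering down) until the returned time, rather than polling, keeps the duty cycle low enough for a small solar panel and battery to sustain the system.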
We conducted a photogrammetric survey along the calving face of
Fjallsjökull, Iceland, to compare a SfM point cloud generated using
imagery from low-cost Raspberry Pi camera sensors to that derived using
imagery captured from a UAV. We successfully produced point clouds along the
front of Fjallsjökull, with a mean absolute M3C2 distance between point
clouds generated by the two sensors of 30.1 cm and a standard deviation of
73.8 cm. The Raspberry Pi camera also achieved sub-metre error at distances
of 1.2–1.5 km from the glacier. This error is comparable to DSLR-grade
sensors and highlights the potential for Raspberry Pi cameras to be used
more widely in glaciology research and monitoring systems. For certain
applications, we suggest, conservatively, that Raspberry Pi sensors are
viable for detecting change of magnitude
Datasets are openly available at
The supplement related to this article is available online at:
LT co-designed the study, conducted data analysis, and wrote the paper. DQ and MS co-designed the study, supervised data analysis, and edited the paper.
The contact author has declared that none of the authors has any competing interests.
Publisher's note: Copernicus Publications remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
This research was funded by a NERC Doctoral Training Partnership studentship to LST (NE/L002574/1), with additional support from the Geographical Club Award and Dudley Stamp Memorial Award of the Royal Geographical Society (PRA 15.20), the Mount Everest Foundation (19-13), and the Gilchrist Educational Trust. We are thankful to Hannah Barnett for assistance in the field and Joe Mallalieu for his invaluable advice in project design. We are also grateful to Robert Vanderbeck for supporting us through the challenges of conducting fieldwork during a pandemic.
This research has been supported by the UK Research and Innovation (grant no. NE/L002574/1), with additional support from the Geographical Club Award and Dudley Stamp Memorial Award of the Royal Geographical Society (grant no. PRA 15.20), the Mount Everest Foundation (grant no. 19-13), and the Gilchrist Educational Trust.
This paper was edited by Pascal Haegeli and reviewed by Karen Anderson and Penelope How.