Evaluation of low-cost Raspberry Pi sensors for structure-from-motion reconstructions of glacier calving fronts
Liam Taylor
Duncan J. Quincey
Mark W. Smith
Download
- Final revised paper (published on 27 Jan 2023)
- Supplement to the final revised paper
- Preprint (discussion started on 09 Aug 2022)
Interactive discussion
Status: closed
- RC1: 'Comment on nhess-2022-201', Karen Anderson, 08 Sep 2022
NHESS2022-201 Taylor et al
This is a nice piece of methodological work, which delivers insights on a new approach for unattended monitoring of calving glaciers via low-cost Raspberry Pi-operated cameras. It's a neat idea and the proof-of-concept is done well. The piece was relatively uncomplicated to review, because it is quite clear in its layout. The major findings are that the Raspberry Pi-operated cameras can deliver quite good quality photogrammetric reconstructions of glacier fronts, and, compared to equivalent data captured from a drone flying along the glacier front, the results are not hugely different, which evidences the capability of the cameras for this task. What is quite impressive is that the very low-cost Raspberry Pi system meets precision thresholds set for DSLR workflows. Monte Carlo point-cloud-to-point-cloud methods are employed to perform a robust comparison between the Raspberry Pi and drone datasets. Overall the paper is uncontroversial but provides a useful reference point for those wanting to develop Raspberry Pi imaging for photogrammetry, or timelapse monitoring for glacial applications as well as in other fields.
The main thing which I think needs a little finessing is that the piece has a title which is about photogrammetry, but the paper is also focused on timelapse. And you can have timelapse functionality without photogrammetry on the Pi, e.g. one camera instead of an array of cameras. So I felt that a bit more careful structuring of the argument could benefit the clarity of the paper and make that distinction a bit more visible. There are some minor points to address, largely relating to some areas needing a little more detail.
Minor points
Line 65 – ‘we have designed…’ – this sounds like a methodology point not something that belongs in introduction. I think the introduction should focus on reviewing the camera technology / hardware here rather than linking to your specific experiment or motivations. Maybe just lose the first sentence of this paragraph and start with ‘raspberry pi computers are small…’
A general point is that timelapse capability can also be achieved very cheaply (less than £120) from wildlife cameras (e.g. the type that are typically used for motion-sense camera trapping). You do pick up on this a little in the discussion but not at the beginning of the piece (e.g. Table 1). Not all trailcams have a timelapse capability, but some do, and many also have in-built solar trickle-charge capacity. I've also seen papers using them as phenocams. It would have been interesting to see how data from these compared to the Pis, but I appreciate that it's too late to ask for that. On that note I also wondered why you didn't do a like-for-like comparison between the Pi and an SLR from the same vantage points. The drone may be the most widely used method for glacial front reconstruction but it is not for timelapse, I think… Perhaps there needs to be an explanation added about this distinction. I think the experiment described around line 140 is addressing this but the explanation is a bit opaque (e.g. "the monitoring network would be cheaper as fewer cameras are required"…)
If this is about time-series monitoring of glacier frontal dynamics, is the spatial reconstruction from the boat-mounted surveys a useful demonstration of the temporal case study? I am referring to the statement at the beginning of the paper where you state that "Arrays of fixed cameras can be positioned around a glacier front to capture images repeatedly over long time periods. The resulting imagery can then be used to photogrammetrically generate 3D models at a high temporal resolution and analyse change over days, months, or years." So I guess that one way would be to position multiple Pis facing the calving front and trigger them simultaneously, to generate SfM products. I felt that the paper warranted a discussion about this high-cadence mode of operation, which seems to be largely what you're advocating, versus the boat-mounted transect operation that you actually carried out.
Figure 4 – it shows the two comparative point clouds from the Pi and the drone, and I note that the colouration of the renderings is different. Is this due to some different camera settings used (e.g. exposure etc.) or something else? It made me think that the methods section needs some added information about these aspects, given that other papers have commented on the impact of camera settings on the quality of SfM outputs. I guess it may not be possible to change the settings on the Pi camera, but this is not the case on the drone camera, so it does warrant some discussion.
Line 125 – I read your argument for flying the drone closer to the glacier than the boat, but I think if you want to compare the Pi to the drone it would have made more sense to use a distance for each which did a better job of balancing the camera resolution capabilities with the distance. It seems like being further from the glacier with a poorer-quality camera will give you a negatively biased estimate of the quality of the Pi camera. Perhaps this warrants some discussion.
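As a rough illustration of that resolution/distance trade-off, ground sampling distance scales linearly with range; the sketch below uses assumed values (a Raspberry Pi HQ camera with roughly 1.55 µm pixels and a 16 mm lens) and hypothetical stand-off distances, not figures taken from the paper.

```python
# Hedged sketch: approximate ground sampling distance (GSD) at the glacier front.
# Sensor values are assumptions for a Raspberry Pi HQ camera (IMX477) with a 16 mm lens.
def gsd_m(pixel_pitch_um: float, focal_length_mm: float, range_m: float) -> float:
    """Ground footprint of one pixel, in metres, at a given range."""
    return (pixel_pitch_um * 1e-6) * range_m / (focal_length_mm * 1e-3)

print(gsd_m(1.55, 16.0, 500.0))  # hypothetical 500 m boat stand-off  -> ~0.048 m per pixel
print(gsd_m(1.55, 16.0, 150.0))  # hypothetical 150 m drone stand-off -> ~0.015 m per pixel
```

The drone camera would of course have its own pixel pitch and focal length, but the same relation applies, which is what makes the distances chosen for each platform matter for a fair comparison.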
Table 1 – I think the cost given here is inaccurate: you did not use a Pi Zero W, so this price is not describing the system used. You could perhaps put a range of prices here to indicate the low-entry-point Zero W and the version you used.
The thing that is lacking from the paper is open-source sharing of the build recipe (e.g. a list of components) and the code for setting up the Pi to run as a timelapse camera. I think this should be added as supplementary information if the paper is accepted.
Line 120 – what is the minimum timelapse interval the Pi camera is capable of? And why approx. 10-second intervals – is it uncertain how often it triggers (i.e. why 'approx.')?
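For reference, a minimal time-lapse loop of the kind whose release is requested above might look like the sketch below. This is a generic example, not the authors' build: it assumes the legacy picamera library, an HQ camera module, and an interval governed by a sleep, which would naturally produce "approx. 10 s" spacing because each capture and file write adds variable overhead.

```python
from time import sleep
from picamera import PiCamera  # legacy library; the newer picamera2 offers an equivalent workflow

# Full-resolution stills; the resolution is an assumption about which camera module was used.
camera = PiCamera(resolution=(4056, 3040))
camera.start_preview()
sleep(2)  # let auto-exposure and white balance settle before the first frame

# capture_continuous yields one filename per captured frame; the sleep only sets a lower
# bound on the interval, so the real spacing drifts slightly above 10 s.
for filename in camera.capture_continuous('img_{timestamp:%Y%m%d-%H%M%S}.jpg'):
    sleep(10)
```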
Line 150 – this pre-alignment with the UAV data sounds great in this context, but what would happen if someone used the Pi without the drone survey? Presumably the registration would then be arbitrary. I realise that you did this for the purposes of cloud-to-cloud comparison in M3C2, but thinking more broadly, does the lack of georeference information matter in the timelapse? Perhaps add some clarification to the manuscript in this regard.
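To make the registration point concrete, the kind of pre-alignment being discussed can be sketched with a generic ICP step in Open3D, as below; the file names and parameters are hypothetical and this is not the authors' processing chain. Without a georeferenced target such as the UAV cloud (or GCPs, or known camera baselines), the Pi-only model remains in an arbitrary, arbitrarily scaled coordinate system.

```python
import open3d as o3d

# Hypothetical inputs: the Pi-derived SfM cloud (arbitrary model coordinates)
# and the georeferenced UAV cloud used as the registration target.
pi_cloud = o3d.io.read_point_cloud("pi_sfm_cloud.ply")
uav_cloud = o3d.io.read_point_cloud("uav_sfm_cloud.ply")

# Point-to-point ICP refines an initial guess (identity here, i.e. roughly overlapping clouds).
result = o3d.pipelines.registration.registration_icp(
    pi_cloud, uav_cloud,
    max_correspondence_distance=1.0,  # metres; assumed search radius
    estimation_method=o3d.pipelines.registration.TransformationEstimationPointToPoint(),
)
pi_cloud.transform(result.transformation)  # apply the fitted rigid transform to the Pi cloud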
Figure 6 needs a colour bar scale legend
Can you comment a little more on the patterns of errors in Figures 4 and 6? You wrote that the jagged edges of ice result in higher errors, but the nature of the patterns in the M3C2 results shows that there are patches of high positive errors neighbouring patches of large negative errors (e.g. blocks of blue and red next to one another). What is the cause of this systematic patterning of error, and why are the differences in the point clouds organised like this?
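For readers wanting to probe such error patterns themselves, a crude first look can be had from unsigned nearest-neighbour cloud-to-cloud distances, as sketched below with Open3D and hypothetical file names; note this is a much simpler measure than M3C2, which projects signed distances along locally estimated normals and is what produces the paired positive/negative (red/blue) patches being asked about.

```python
import numpy as np
import open3d as o3d

pi_cloud = o3d.io.read_point_cloud("pi_sfm_cloud_aligned.ply")  # hypothetical, already co-registered
uav_cloud = o3d.io.read_point_cloud("uav_sfm_cloud.ply")

# Unsigned nearest-neighbour distance from every Pi point to the UAV cloud.
# Unlike M3C2, these distances carry no sign, so they cannot distinguish the
# neighbouring over- and under-estimation patches visible in the paper's M3C2 maps.
distances = np.asarray(pi_cloud.compute_point_cloud_distance(uav_cloud))
print(f"median: {np.median(distances):.3f} m, 95th percentile: {np.percentile(distances, 95):.3f} m")
```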
Thanks for the opportunity to review the paper.
Citation: https://doi.org/10.5194/nhess-2022-201-RC1
- AC1: 'Reply on RC1', Liam Taylor, 03 Jan 2023
- RC2: 'Review of Taylor et al. Evaluation of low-cost Raspberry Pi sensors for photogrammetry of glacier calving fronts', Penelope How, 23 Nov 2022
Review of Taylor et al. "Evaluation of low-cost Raspberry Pi sensors for photogrammetry of glacier calving fronts"
Taylor et al. present a Raspberry Pi system for capturing time-lapse images and producing glacial photogrammetry measurements. Alongside UAV surveying, the Raspberry Pi system is used to produce SfM models of a calving glacier in Iceland to evaluate its potential uses in glaciology. The study effectively shows the value of this system to the glaciology community, producing accurate SfM models and demonstrating its applications in operational monitoring of glaciers. I recommend publication of this work after minor corrections, with my main comments regarding the scope and focus of the paper, and the inclusion of a more extensive reference list. I was really excited to review this paper when I saw the request for reviewers, and it did not disappoint. Congratulations on an interesting paper that was very enjoyable to read.
Main comments
1. The use of the word "photogrammetry" in the title is slightly misleading given that much of the focus is on the application of the Raspberry Pi system in Structure-from-Motion (SfM). Photogrammetry more typically refers to traditional methods of scale factoring, tracking and georectification. SfM is a newer method that, although it falls under the umbrella term photogrammetry, should be made explicit here to avoid ambiguity. This is not to say that the Raspberry Pi system cannot be used for more traditional photogrammetry techniques (and you clearly demonstrate that it can), but given that the focus of the paper is on its SfM applications, the title should be changed to reflect this. In addition, there are instances in the Introduction (L19, L59, L61, and L79) where the term "photogrammetry" should be changed to "SfM" as you are referring to specific SfM techniques.
2. In the Abstract and Introduction (e.g. L8-10, L31-34), it is stated that this work will be useful for monitoring small mountain glaciers in land-terminating settings and GLOF events. I think the scope of the authors' work reaches beyond this and is also very valuable for the monitoring of marine-terminating glaciers, where ice ballistics and tsunamis generated from calving are a major hazard to bystanders and cruise ships. The Discussion and Conclusion more effectively demonstrate the use of the Raspberry Pi across different glacier settings and for a variety of applications; however, I would like to see this also expressed in the Abstract and Introduction.
3. The main advantage of the Raspberry Pi system stated throughout is that it is cost-effective compared to off-the-shelf time-lapse camera systems (e.g. L12, L48, L63). Whilst I agree that this is a cost-effective system, I think the main advantage of this system is that it has greater capabilities and adaptability than a standard off-the-shelf system. Such a system can have more sophisticated programming and functionality. Not only will near-real-time monitoring be an option, but also near-real-time processing of images, which could technically be conducted on-site on the Raspberry Pi itself; this could mean that lightweight processed data (e.g. GLOF water level, ice velocity, terminus position, supraglacial lake area etc.) could be transmitted rather than the bulky image data. This capability is seldom found in a typical DSLR camera, or provided by time-lapse installation distributors. I think this is more clearly explained in the Discussion section of the paper; however, I would like to see these advantages focused on in the first sections, rather than its cost-effectiveness.
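A minimal sketch of what such on-device reduction could look like is given below: the frame is collapsed to a single derived number (here a crude blue-dominance "water fraction") and only that value is transmitted. The threshold, metric and endpoint are all illustrative assumptions and are not described by the authors.

```python
import json
import numpy as np
import requests
from PIL import Image

# Hypothetical: reduce the latest captured frame to a tiny derived value on the Pi itself.
img = np.asarray(Image.open("img_latest.jpg"), dtype=float)
blue_dominant = (img[..., 2] > img[..., 0] + 20) & (img[..., 2] > img[..., 1] + 20)
water_fraction = float(blue_dominant.mean())  # crude proxy for water-covered pixel fraction

# A few bytes of JSON instead of a multi-megabyte image over a constrained telemetry link.
requests.post(
    "https://example.org/api/observations",  # hypothetical endpoint
    data=json.dumps({"water_fraction": water_fraction}),
    headers={"Content-Type": "application/json"},
    timeout=30,
)
```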
4. I would like to see more previous glacial photogrammetry work referred to throughout the paper. The literature is heavily weighted to UAV studies, and I would like to see more terrestrial time-lapse papers referred to, given that this is the main application of the Raspberry Pi system. I have provided many in the minor comments and reference list at the end of this review, and I would also like to see others added by the authors.
Minor comments
L8-10: See major comment #2 regarding broadening the scope of the paper, and not just focusing on land-terminating calving glaciers. You could include more examples here of hazards caused by calving, such as ice ballistics, tsunami waves and iceberg collapses.
L12: High equipment costs are just one reason that monitoring systems are difficult to implement. Monitoring systems are challenging to set up as well because of the lack of infrastructure (e.g. cell/Iridium coverage) for near-real-time data transmission, and challenges in implementing tracking/detection with fully-automated workflows. See main comment #3.
L18: "Raspberry Pi cameras represent..." >> "Raspberry Pi cameras present..."
L28-34: Same as comment on L8-10 (see main comment #2)
L35: Calving rate can not only be calculated through iceberg detection, but also by knowing the ice velocity and terminus change (e.g. Luckman et al., 2015; Schild et al., 2018; and many others)
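For clarity, the relation alluded to here is the usual flux balance in which calving rate equals terminus ice speed minus the rate of terminus position change; a brief illustration with hypothetical numbers:

```python
# c = u_T - dL/dt: calving rate from terminus-averaged ice speed and terminus position change.
u_terminus = 5.0                   # m per day, hypothetical terminus ice speed
dL_dt = -1.5                       # m per day, hypothetical terminus change (negative = retreat)
calving_rate = u_terminus - dL_dt  # 6.5 m per day of ice removed by calving
```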
L37: "smaller mountain glaciers" >> "calving glaciers"
L43: Measured glacier velocities from oblique time-lapse images are three-dimensional measurements, transformed from the image plane to three-dimensional space through the process of georectification. Equally, two-dimensional areas of calving events have proved a good measure of calving event size (e.g. Bunce et al., 2021; Holmes et al., 2021). Therefore, I think the statements that single stationary cameras are limited to 2D measurements and offer little in the detection of calving magnitude are misleading and should be corrected.
L45: Whilst multi-camera, Structure-from-Motion set-ups are ideal for constraining calving volumes, the physics of a calving event can also be captured with high-temporal-resolution single time-lapse sequences (e.g. Holmes et al., 2021).
L48: I think that while start-up costs for UAV surveying are indeed high, it can become cost-effective in the long term (as long as the UAV is maintained correctly and is not damaged). Personnel and fieldwork logistics are likely the biggest costs, which are common across most glaciology research with a fieldwork component. The value of the Raspberry Pi system is that it could reduce the number of re-visits, as data could be processed and transmitted in an automated manner rather than downloaded on-site. See main comment #3 regarding the cost-effectiveness of the Raspberry Pi system.
L54-56: There are many more examples and applications where DSLR cameras have been positioned at glaciers to capture a plethora of glaciological measurements (e.g. glacier velocities, supraglacial lake change, snow coverage/snowline positions, crevasse tracing, terminus position change, calving). Please include more examples, starting with the reference list at the end of this review (see main comment #4).
L62: Other monoscopic photogrammetry toolboxes in glaciology: ImGRAFT (Messerli and Grinsted, 2015), EMT (Schwalbe et al. 2017), and Pointcatcher (James et al., 2016)
L63: I think the trail cameras from the Kangerlussuaq set-up by Mallalieu et al. (2020) were not very expensive, so please consider changing this statement.
L80: "incorporating low-cost sensors in glacier monitoring systems." >> "incorporating low-cost, high-functionality sensors in glacier monitoring systems."
Table 1: Please include camera focal lengths, to indicate how you arrived at the horizontal field of view (FOV) angles. In the case of the Canon Rebel T5, you could provide the focal length of the kit zoom lens, 18-55 mm. I'm actually surprised that the Raspberry Pi FOV is so narrow, given that the lens has a small focal length (16 mm).
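For reference, the horizontal FOV follows from sensor width and focal length as FOV = 2·arctan(w / 2f); the sketch below assumes the Raspberry Pi HQ camera's IMX477 sensor (roughly 6.3 mm wide) with the 16 mm lens, which indeed gives a narrow view of about 22 degrees, and the Canon Rebel T5's APS-C sensor (about 22.3 mm wide) at the kit lens's 18 mm wide end.

```python
import math

def horizontal_fov_deg(sensor_width_mm: float, focal_length_mm: float) -> float:
    """Horizontal field of view in degrees from pinhole-camera geometry."""
    return math.degrees(2 * math.atan(sensor_width_mm / (2 * focal_length_mm)))

print(horizontal_fov_deg(6.3, 16.0))   # assumed HQ camera sensor + 16 mm lens -> ~22 deg
print(horizontal_fov_deg(22.3, 18.0))  # Canon Rebel T5 APS-C at 18 mm         -> ~63 deg
```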
L120: I see you interchangeably refer to the system as "Pi" and "Raspberry Pi". I would suggest sticking with one to be consistent throughout the paper.
L130: Whilst I like this figure, I would like to see the field photos alongside an annotated photo/diagram of the Raspberry Pi system, outlining the key components. I would suggest removing photo B and replacing it with a close-up photo of the Raspberry Pi that clearly shows the system.
L149: What is the accuracy of the RTK system?
L154: "high quality points clouds" >> "high quality point clouds"
L155: What is a mild filter in Agisoft Metashape? Is this a low-pass smoothing correction? If you could define the filtering method then it should be included here.
L214: I think this is the first instance that the acronym "SfM" is used. Please define it earlier in the manuscript, at the first use of "Structure from Motion (SfM)" in the main body, and then use "SfM" throughout the rest of the manuscript.
L245-L252: Camera positioning is often a big limitation in glaciology studies given that you have to fit the set-up to the environment you are working in (e.g. working around proglacial lakes, inaccessible areas and differing ground stability). Therefore, positioning cameras at precise heights and angles is sometimes not possible. In fact, the majority of glacier photogrammetry studies have cameras positioned above the glacier front in order to yield the most accurate data (e.g. Holmes et al., 2021; How et al., 2017; Medrzycka et al., 2018; Schild et al., 2016) - please include more examples to demonstrate that many studies have adopted this approach.
L254-260: What are these alternate methods? Can you give some examples of where alternate methods have been used, and do they yield measurements that are as accurate as GCPs? My understanding is that precise GPS positioning has been used as a good alternative to GCPs in UAV studies (e.g. Chudley et al., 2019; Jouvet et al., 2019), but not so confidently in terrestrial SfM studies (e.g. Mallalieu et al., 2019). I think the Raspberry Pi set-up also has applications in broader photogrammetry and not just SfM studies. That being said, GCPs are essential for oblique terrestrial photogrammetry (e.g. a single time-lapse camera placed on land at a calving front) because the camera pose (i.e. its angular position in the real world environment - yaw, pitch, roll) has to be estimated from GCPs (Messerli and Grinsted, 2015; Schwalbe et al., 2017) in order to produce an accurate projection model. Additionally, GCPs are an effective way to define and constrain the error of the projection model (How et al., 2020). I think this is an interesting point and you are correct in tackling it here, but perhaps there is scope to open this up to a bigger discussion.
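The pose-from-GCPs step described here is essentially a Perspective-n-Point problem; a generic sketch with OpenCV is given below. The coordinates and intrinsics are placeholder values, and this is not the authors' code or that of the cited toolboxes (ImGRAFT, EMT, PyTrx), which implement their own georectification routines.

```python
import cv2
import numpy as np

# Placeholder GCPs: world coordinates (local metric datum, metres) and the matching
# pixel coordinates digitised in one time-lapse image. Values are purely illustrative.
world_pts = np.array([[0, 0, 0], [120, 5, 2], [80, 210, -4],
                      [15, 150, 30], [200, 90, 10], [60, 40, 55]], dtype=np.float64)
image_pts = np.array([[410, 980], [1620, 955], [1240, 310],
                      [520, 420], [1910, 640], [760, 120]], dtype=np.float64)

# Intrinsics assumed known from a prior camera calibration; distortion neglected for the sketch.
K = np.array([[3050.0, 0.0, 2028.0],
              [0.0, 3050.0, 1520.0],
              [0.0, 0.0, 1.0]])
dist = np.zeros(5)

# solvePnP recovers the camera pose (rotation and translation, i.e. yaw/pitch/roll and position)
# relative to the GCP coordinate system, which is what defines the projection model.
ok, rvec, tvec = cv2.solvePnP(world_pts, image_pts, K, dist)
R, _ = cv2.Rodrigues(rvec)  # rotation vector -> 3x3 rotation matrix
```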
L264-273: I think you are discrediting your Raspberry Pi system too much here! Yes, the spatial coverage of a terrestrial camera is limited compared to a UAV; however:
1. Terrestrial cameras can be placed higher up on the glacier tongue in certain settings to capture processes such as ice flow, crevasse propagation, and lake drainage (e.g. How et al., 2017; Fahrner et al., 2021)
2. A key advantage is that it can be placed in the field for long periods of time and produce much longer, higher-temporal-resolution time-series
Additionally, this Raspberry Pi system has the potential for operational monitoring in an automated manner. It is highly unlikely that the glaciology community will ever be able to use UAVs operationally in a completely automated manner (i.e. no pilot).
L288: "timelapse" >> "time-lapse". This is the convention generally adopted by the glaciology community, so please also change all other instances of this.
L294-315: These two paragraphs are great for showcasing the advantages of your Raspberry Pi system. I think you have conveyed its potential in operational monitoring very clearly. Please include more examples of potential applications in glaciology to demonstrate its far-reaching impact, specifically in the section L295-300; such as monitoring GLOFs (e.g. Muslow, Koschitzki and Maas, 2015), supraglacial lake drainage (Danielson and Sharp, 2013), iceberg tracking (Kienholz et al., 2019), grounding line position (Rosenau et al., 2013), and seasonal snowline migration (Messerli et al., 2022).
L330-345: I think another recommendation should be that whilst SfM-generated models can be produced without GCPs, it is advisable to collect GCPs in order to produce accurate photogrammetric measurements from a Raspberry Pi system (especially if only using a single system instead of an array of systems)
References
Bunce et al. (2020) Influence of glacier runoff and near-terminus subglacial hydrology on frontal ablation at a large Greenlandic tidewater glacier. J. Glaciol. 67(262), 343-352. https://doi.org/10.1017/jog.2020.109
Danielson and Sharp (2013) Development and application of a time-lapse approach analysis method to investigate the link between tidewater glacier flow variations and supraglacial lake drainage events. J. Glaciol. 59(214), 287-302. https://doi.org/10.3189/2013JoG12J108
Fahrner et al. (2021) Using sub-daily timelapse imagery to investigate the behaviour of Narsap Sermia, SW Greenland. EGU General Assembly Conference Abstract, EGU21-7792. https://doi.org/10.5194/egusphere-egu21-7792
Holmes et al. (2021) Calving at Ryder Glacier, Northern Greenland. JGR Earth Surface 126(4), e2020JF005872. https://doi.org/10.1029/2020JF005872
How et al. (2017) Rapidly changing subglacial hydrological pathways at a tidewater glacier revealed through simultaneous observations of water pressure, supraglacial lakes, meltwater plumes and surface velocities. Cryosphere 11, 2691-2710. https://doi.org/10.5194/tc-11-2691-2017
How et al. (2020) PyTrx: A Python-Based Monoscopic Terrestrial Photogrammetry Toolset for Glaciology. Front. Earth Sci. 8:21. doi: 10.3389/feart.2020.00021
James et al. (2016) Pointcatcher software: analysis of glacial time-lapse photography and integration with multitemporal digital elevation models. J. Glaciol. 62(131), 159-169. https://doi.org/10.1017/jog.2016.27
Kienholz et al. (2019) Tracking icebergs with time-lapse photography and sparse optical flow, LeConte Bay, Alaska, 2016-2017. J. Glaciol. 65(250), 195-211. https://doi.org/10.1017/jog.2018.105
Luckman, A. et al. (2015) Calving rates at tidewater glaciers vary strongly with ocean temperature. Nat. Comm. 6, 8566. https://doi.org/10.1038/ncomms9566
Mallalieu et al. (2017) An integrated Structure-from-Motion and time-lapse technique for quantifying ice-margin dynamics. J. Glaciol. 63(242)
Medrzycka et al. (2018) Calving behaviour at Rink Isbrae, West Greenland, from Time-Lapse Photos. Arct. Antarc. Alp. Res. 48(2), 263-277. https://doi.org/10.1657/AAAR0015-059
Messerli and Grinsted (2015) Image georectification and feature tracking toolbox: ImGRAFT. Geosci. Instrum. Methods Data Sys. 4, 23–34. https://doi.org/10.5194/gi-4-23-2015
Messerli et al. (2022) Snow cover evolution at Qasigiannguit Glacier, southwest Greenland: A comparison of time-lapse imagery and mass balance data. Front. Earth Sci. 10:970026. https://doi.org/10.3389/feart.2022.970026
Muslow, Koschitzki and Maas (2015) Photogrammetric monitoring of glacier margin lakes. Geom. Nat. Hazard. Risk 6(5-7), 600-613. https://doi.org/10.1080/19475705.2014.939232
Rosenau et al. (2013) Grounding line migration and high resolution calving dynamics of Jakobshavn Isbrae, West Greenland. J. Geophys. Res. Earth Surf. 118(2), 382-395. https://doi.org/10.1029/2012JF002515
Schwalbe et al. (2017) The determination of high-resolution spatio-temporal glacier motion fields from time-lapse sequences. Earth. Surf. Dyn. 5(4), 861-879. https://doi.org/10.5194/esurf-5-861-2017
Schild et al. (2018) Glacier calving rates due to subglacial discharge, fjord circulation, and free convection. JGR Earth Surface 123(9), 2189-2204. https://doi.org/10.1029/2017JF004520
Citation: https://doi.org/10.5194/nhess-2022-201-RC2
- AC2: 'Reply on RC2', Liam Taylor, 03 Jan 2023