Characterizing the Rate of Spread of Wildfires in Emerging Fire Environments of Northwestern Europe
- 1Tecnosylva, S.L Parque Tecnológico de León, 24004 León, Spain
- 2Department of Crop and Forest Sciences, University of Lleida, 25198 Lleida, Spain
- 3School of Geography, Earth and Environmental Sciences, University of Birmingham, Birmingham, UK
- 4Joint Research Unit CTFC - AGROTECNIO - CERCA, 25280 Solsona, Spain
- 5Department of Environmental Sciences, Wageningen University, PO box 47, 6700 AA Wageningen, The Netherlands
Abstract. In recent years fires of greater magnitude have been documented throughout northwest Europe, and with several climate projections indicating future increases in fire activity in this temperate area, it is imperative to identify the status of fire in this region. This study unravels important unknowns about the state of the fire regime in northwest Europe by characterizing one of the key aspects of fire behavior, the rate of spread (ROS). Using an innovative approach to cluster VIIRS hotspots into fire perimeter isochrones to derive ROS, we identify the effects of land cover and season on fire rate of spread of 254 landscape fires that occurred between 2012 and 2020. Results reveal no significant differences between land cover types and there is a clear peak of ROS and burned area in the months of April and May. During this late spring period, 67 % of the burned area occurs and median fire runs are approximately 0.16 km/hr during a 12 hour overpass. Heightened ROS and burned area values persist in the bordering months of March and June suggesting that may present the extent of the fire season in northwestern Europe. Accurate data on ROS among the represented land cover types as well as periods of peak activity are essential for determining periods of elevated fire risk, the effectiveness of available suppression techniques as well as appropriate mitigation strategies (land and fuel management).
Victor Mario Tapia et al.
Status: final response (author comments only)
RC1: 'Comment on nhess-2022-47', Anonymous Referee #1, 31 Mar 2022
Tapia et al. presented a new approach to cluster VIIRS hotspots and derive the rate of spread (ROS) for each fire in this manuscript. They applied this approach to landscape fires in northwestern Europe and examined the relationship between ROS and land cover type, season, and geographical location (country). The ROS is closely related to other aspects of fire behavior and to the impact of fires on ecosystems, so the documentation of ROS across northwestern Europe in this manuscript provides a good reference for future studies. However, there are some issues in the current manuscript that prevent me from recommending it for acceptance by NHESS.
1. Need a more complete description of the VIIRS data and the approach
For VIIRS fire data, the authors need to provide some background information on the satellite, the remote sensor, the data product (including the name, resolution, uncertainty, etc.), as well as the data filtering approaches. For example, what exact fire product did you use, the monthly data or the near real time (NRT) data? Is the resolution 375 m for all the pixels?
Some of the information on the fire clustering algorithm is also missing or not clear. How was the temporal grouping performed? How were the timing and the land cover of a fire determined? How did you verify whether a fire is a real fire? See the ‘Minor comments’ below for detailed questions.
The approach to calculate ROS, which is the centerpiece of this study, also lacks important information. Sometimes the descriptions are contradictory or confusing (e.g. Which ROS statistics did you use? You mentioned maximum ROS, median ROS, and average ROS in the manuscript). The interpolation algorithm of vertices is also unclear to me.
2. Concerns about methodology
The VIIRS active fire data represent the center location of each pixel. The pixel size is ~375 m at nadir and varies with scan angle (it can be much bigger at the edge of the swath). The current fire shape algorithm used these center locations directly, without considering the detection uncertainty of the data. Can this influence the calculated ROS? In the spatial grouping of fire pixels, 5 km rasters were used. This allows fire pixels separated by distances of up to ~14 km to be merged. Is this too conservative? How do variations in this value affect the clustering and the ROS? For the alpha value, why did you use 1 km? Should you estimate the uncertainty related to this value?
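For illustration only: one cheap way to probe the footprint question would be to rebuild the fire shapes from half-pixel buffers rather than from the center points and compare the resulting ROS values. The sketch below assumes a hypothetical GeoDataFrame `detections` of hotspot center points in EPSG:4326 and uses EPSG:3035 purely as an example of a metric CRS.

```python
# Sketch only: rebuild fire shapes from approximate pixel footprints instead of centre points,
# to estimate how sensitive the derived perimeters (and hence ROS) are to the detection footprint.
# Assumes a hypothetical GeoDataFrame `detections` of hotspot centre points in EPSG:4326.
import geopandas as gpd

def footprint_envelopes(detections: gpd.GeoDataFrame, half_pixel_m: float = 187.5) -> gpd.GeoSeries:
    projected = detections.to_crs(epsg=3035)                      # metric CRS (assumed choice)
    return projected.geometry.buffer(half_pixel_m, cap_style=3)   # cap_style=3 -> square buffers
```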
3. Statistical robustness
The other issue I’m concerned about is whether some of the analyses (and results) are statistically robust. The total number of fires you used for analysis in this manuscript is only 254. While this number may be sufficient for statistical analysis over the whole region, you further divided the fires into different seasons, land cover types, and countries. I doubt the sample size is sufficient for all of these categories.
4. More analysis on ROS
It’s good to see the ROS variations across different land covers and seasons. But I expect the authors to do more analysis to support the usefulness of the dataset. Some examples include, but are not limited to, the relationship between burned area increment and ROS; the influence of weather variables on ROS; and the statistical relationship between ROS and fire size.
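For the last of these, even a simple rank correlation would be informative. A minimal sketch in Python, assuming a hypothetical per-fire DataFrame `fires` with columns `ros_kmh` and `area_km2`:

```python
# Sketch: rank correlation between per-fire ROS and final fire size.
# `fires` is a hypothetical DataFrame with columns `ros_kmh` and `area_km2`.
from scipy.stats import spearmanr

rho, p_value = spearmanr(fires["ros_kmh"], fires["area_km2"])
print(f"Spearman rho = {rho:.2f}, p = {p_value:.3f}")
```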
Minor comments:
Line 23: “suggesting that may present the extent of the fire season”
What does the ‘that’ refer to? Maybe change it to something like ‘this period’ or ‘these months’.
Line 38: “Moritz et al. ((Moritz et al., 2012)”
Some citations (such as this example) are not formatted correctly.
Lines 39-40: “in the last quarter of the 21st century (2070–2099)”
The period 2070–2099 spans 30 years, which is more than a quarter of a century.
Lines 117-118: “its higher spatial and temporal resolution compared to other satellites such as MODIS”
Some satellites have higher spatial or temporal resolution than VIIRS, so ‘other satellites’ needs additional qualification.
Lines 134: “VIIRS detections are points scattered in time and space”
This is not quite true. A location record in the VIIRS fire data file does not represent the exact burning location, which could be anywhere within a pixel of the VIIRS footprint (which also varies with scanning angle).
Line 143-145: “The clustering in time was conducted by ordering the space clusters by time and creating divisions or break points if there was a time difference greater than 48 hours in between consecutive points.”
The clustering method along the temporal axis is not clear to me. How did you determine which space clusters should be tested temporally? What do ‘consecutive points’ mean: individual fire center locations, or the 5 km pixels?
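One plausible reading of the quoted sentence is sketched below (detections within a spatial cluster are ordered by acquisition time, and a new temporal cluster starts whenever the gap to the previous detection exceeds 48 hours); whether this matches your implementation is exactly what needs to be stated. The column name `acq_time` is an assumption.

```python
# Sketch of one possible reading of the 48 h temporal split (not necessarily the authors' method).
# `cluster` is a hypothetical DataFrame of detections for one spatial cluster with a datetime column `acq_time`.
import pandas as pd

def split_in_time(cluster: pd.DataFrame, max_gap_hours: float = 48.0) -> pd.Series:
    ordered = cluster.sort_values("acq_time")
    gaps = ordered["acq_time"].diff()
    # A new temporal cluster starts whenever the gap to the previous detection exceeds the threshold.
    event_id = (gaps > pd.Timedelta(hours=max_gap_hours)).cumsum()
    return event_id.reindex(cluster.index)   # temporal-cluster id per detection
```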
Line 174-176: “To increase the accuracy of the spread vectors, the number of vertices at each polygon and time step was increased by linear interpolation between neighboring points.”
The ROS algorithm is a centerpiece of this study and needs to be clearly described. For example, in this sentence, please be more specific about the condition under which the linear interpolation is performed (e.g., when the distance between two consecutive vertices exceeds X m…) and give a more detailed description of the interpolation itself (e.g., the number of vertices inserted every X m…).
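As an indication of the level of detail expected, a densification step can be stated in a couple of lines; the sketch below inserts interpolated vertices so that no two consecutive vertices are more than a chosen spacing apart (the 100 m value is illustrative, not taken from the manuscript, and Shapely >= 2.0 is assumed).

```python
# Sketch: densify a fire perimeter by linear interpolation between neighboring vertices.
# The 100 m spacing is illustrative only; Shapely >= 2.0 is assumed for `segmentize`.
from shapely.geometry import Polygon

def densify(perimeter: Polygon, max_spacing_m: float = 100.0) -> Polygon:
    return perimeter.segmentize(max_spacing_m)
```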
Line 183-184: “Copernicus Land Monitoring Service’s Corine Land Cover Map 2018 ((2019a)) to distinguish landscape fires from other heat sources such as active volcanoes, artifacts of heated plumes”
Can the Copernicus Land Cover Map be used to distinguish volcanic eruptions from fires?
Line 192: “As each timestep also featured data on land cover”
What land cover product did you use for this purpose? Still Copernicus Land Monitoring Service’s Corine Land Cover Map 2018? Please also describe how you determined the land cover type for each fire when the fire is big enough to cover different land cover pixels.
Line 198: “ANOVA and Tukey statistical analysis”
This statistical method may not be familiar to many readers. Please add a reference.
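Besides a reference, it would also help to name the implementation used; the standard one-way ANOVA followed by Tukey's HSD is available in SciPy/statsmodels, as in the sketch below (hypothetical DataFrame `fires` with columns `ros_kmh` and `land_cover`).

```python
# Sketch: one-way ANOVA followed by Tukey's HSD across land-cover classes.
# `fires` is a hypothetical DataFrame with columns `ros_kmh` and `land_cover`.
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

groups = [g["ros_kmh"].values for _, g in fires.groupby("land_cover")]
f_stat, p_value = f_oneway(*groups)

tukey = pairwise_tukeyhsd(endog=fires["ros_kmh"], groups=fires["land_cover"], alpha=0.05)
print(tukey.summary())
```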
Line 204-205: “of which 254 were verified to be “real” landscape fires”
Please specify the details about the way you verified the real fires.
Line 209: “timing of the fire, the burnt area, the land cover, and the maximum ROS”
For a fire covering multiple raster pixels and time steps, how did you determine the ‘timing of the fire’ and ‘the land cover’ for the whole fire?
Line 219-221 : “On the other hand, fires less than 0.01 km2 were rarely detected with our satellite-based analysis, comprising approximately 0.002% of the total burned area and 1% the total number of fires. Fires between 0.01–0.1 km2 were also seldom observed with 0.3% of the burnt area set by 10.2% of total fires.”
In the method section (Line 136), you mentioned “filtering out clusters with less than 20 VIIRS hotspots”. This filtering will certainly reduce the number of small fires (in the <0.01 km2 and 0.01–0.1 km2 bins). So the fractions of fires in the different size groups can be artificial.
Line 229: “It was during this period that the median ROS was the greatest”
In the Method section (Line 192), you said you used ‘maximum ROS’, here you said you used ‘median ROS’. Did you calculate the maximum ROS for a single fire (at a single time step, or for all time steps?), and then calculate the median ROS from all fires? The description needs to be clear in the Methods section.
Line 278-280: “The lack of fires smaller than 1 km2 can likely be explained by the fact that the VIIRS satellite was unable to capture fires of this magnitude due to limitations of the temporal and spatial resolution.”
Again, is this because you filtered fires with less than 20 VIIRS hotspots?
Line 330: “our study did not yield any significant effect of land cover on ROS”
This conclusion is not consistent with what is shown in Figure 5, where we can see obvious differences in the ROS for different land cover types.
Line 369: “lie within the methodology implemented, which produced average spread rates. “
Now you say ‘average spread rates’. So it’s not the ‘maximum ROS’ you mentioned in line 192?
Figure 1. The caption says “b) VIIRS hotspots retrieved from the area of interest”. But I didn’t see hotspots in this panel. I only see land cover types shown on the map.
Figure 4. Considering there are only 254 fires in total, the number of samples in each country-month bin is expected to be small (it would also be good to show this number in the figure). The statistical robustness needs to be addressed.
Figure 5. What do the ‘n’ values refer to? I don’t think they are numbers of fires, since their total is well above 254 (the total fire number).
RC2: 'Comment on nhess-2022-47', Anonymous Referee #2, 25 Apr 2022
The preprint "Characterizing the Rate of Spread of Wildfires in Emerging Fire Environments of Northwestern Europe" by Mario Tapia et al. presents a systematic investigation of wildfire rate of spread (ROS) derived from VIIRS 375 m fire products. The article is well-presented and overall clearly written.
The authors propose a methodology to formalize the quantification of a fire behavior variable which, undoubtedly, is frequently estimated in an ad-hoc manner by users of fire detection data, specifically from the fire and natural hazard community, when managing a specific fire event. The authors' approach is potentially suitable to use as the basis for developing a remotely sensed product of interest to the user community. As such, the work is innovative and of undeniable interest. As the article's area of interest is north-western Europe, a region not known for very large or disruptive wildfires but one where climate change is expected to increase the future prevalence of the wildfire hazard, the work also contributes to an enhanced understanding of the fire regimes in this part of the world.
Notwithstanding these strengths in scientific significance and quality of the presentation, I perceive a certain number of weaknesses that should be addressed before the manuscript is accepted for publication.
- Definition and structure of the study area (2.1). To the reader who is not immersed into the study of this area, the choice of study area appears at least somewhat arbitrary. Was the intent here to study an area of Europe somewhat under-represented in the study of wildfire, and therefore to apply boundaries so as to stay clear of the Mediterranean region in the south and the Scandinavian/boreal region in the west and north? The eastern boundary and the choice of the 49th parallel should be better justified. If this is a commonly studied area thus delineated, a citation should be added. This point may appear as a formality, but I believe it is more significant than that, especially when it comes to the statistics presented for the countries outside the British Isles. For example, a quick look into German fire statistics shows that, contrary to the findings presented here, wildfire activity tends to peak in the month of August. It also shows, however, that German wildfires are dominated by fire events in the Land of Brandenburg, which is cut in half by the eastern border of the study area here. Given this kind of limitation, and the extremely small sample size of fires outside the British Isles, I do not think that per-country statistics (3.3 and Fig 4) should be presented for the countries other than the British Isles.
- VIIRS data description, limitations, pre-processing and exploratory statistics. Section 2.2 needs to clearly describe which VIIRS product was used (I presume VNP14IMGTDL_NRT), and also confirm that the study is based only on S-NPP VIIRS data (no NOAA-20 data, which would duplicate the data record in the last year or so). Given that the filtering for retained fire detections ("real" fires) ended up rejecting ~90% of fire clusters, it is odd that clustering happened before filtering. The filtering criteria are also not very clear. A cleaner approach would have been to filter by land cover type (or, potentially, by using available GIS data of nature preserves, forested areas, etc.) first and then cluster the remaining events. Regardless, it would be instructive to see some minimal exploratory statistical description of the retained fire events: how many by year? By land cover type? Their final number (254) is very small compared to the number of known fires in this area over the 9 years of the study period. This is to be expected, as it is known that VIIRS misses many detections. But this fact is a rather relevant limitation of the study, which needs to be discussed. As-is, it seems likely that the results are dominated by particularly large fire years in specific sub-areas, which may very well skew the ROS statistics presented in the results. For example, the 2019 peatland fires in Scotland and Northern England may account for a rather outsized part of the results.
- Algorithm description. In my view, the chief interest of this work is the ROS vector generation algorithm. More effort should be deployed to describe its strengths and limitations. For example, in section 2.4 and Fig. 2, the fire detections are not points but VIIRS pixels of at least 375 x 375 m in size (or substantially larger if the acquisition is off-nadir). The VIIRS data include complementary information (which may include the x and y pixel extent, depending on the product used, and does include a confidence rating): was this information used in any way, and how sensitive are the ROS derivations to it? Also, fire spread has an extremely strong diurnal pattern, so a spread rate reported in km/h is a value that has undergone averaging. In Fig. 2 you present an example with ~14 h between successive acquisitions, but VIIRS overpasses can re-image the same spot with an interval of 90 min or up to several days, and the ROS values you would obtain would be radically different given the diurnal variation. At the very least you should report the distribution of delta-t values used for the ROS calculations, and possibly apply a correction factor based on expected temporal fire activity patterns.
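Reporting this distribution takes only a few lines; a minimal sketch, assuming a hypothetical DataFrame `isochrones` with one row per fire perimeter and columns `fire_id` and `acq_time` (datetime):

```python
# Sketch: distribution of delta-t between successive acquisitions used for ROS.
# `isochrones` is a hypothetical DataFrame with one row per perimeter and columns `fire_id`, `acq_time`.
import matplotlib.pyplot as plt

delta_t_hours = (
    isochrones.sort_values(["fire_id", "acq_time"])
              .groupby("fire_id")["acq_time"]
              .diff()
              .dropna()
              .dt.total_seconds() / 3600.0
)
print(delta_t_hours.describe())                   # median, spread and extremes of the sampling interval
delta_t_hours.hist(bins=30)
plt.xlabel("hours between successive acquisitions")
plt.show()
```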
Some more localized comments:
16/17: Given the substantial statistical limitations of the study, I think that this sentence overstates the amount of insight gained for understanding of fire regimes.
31: An anomaly is probably an understatement. There is a long record of fire use for landscape management by successive human populations.
34-36: The increasing peatland megafires should probably be mentioned here, especially since my suspicion is that they dominate the dataset this study is based on.
39: Moritz et al. (2015) reference needs to be re-formatted.
45/46: What about ignitions?
90: The capitalization of n/Northwest/ern Europe should be unified.
120: Extraneous semicolon.
Figure 1: Please revise the legend. It should indicate the origin of the land cover classification. Also, the hotspots - or hotspot clusters? - are in subfigure a), not b).
101/102: The capitalization choice "northern Atlantic Biogeographical region" is odd here and in the following.
137-143 [re: spatial clustering]: There are other algorithms that also do not require cluster centers and the number of clusters to be indicated a priori. With about 40,000 detections per year, this data volume would not be a problem, for example, for a variant of DBSCAN. Not that the outcome is going to be very different, but the clustering methodology comes across as somewhat clunky. The 5 km and 20-detection thresholds aren't very well justified. (Also, were these distances measured in a projected coordinate system, that is, was the whole dataset reprojected, and if so to which coordinate system?) Later, in the Results section, there is insufficient reporting on the impact of the clustering parameter choices on the final dataset of fire events.
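For illustration only, a DBSCAN variant of the spatial step could look like the sketch below, run on coordinates reprojected to a metric CRS (EPSG:3035 is my assumption, not the authors' stated choice); the eps and min_samples values simply mirror the 5 km and 20-detection thresholds and would need the same justification.

```python
# Sketch: DBSCAN-based spatial clustering of VIIRS detections in a projected (metric) CRS.
# `detections` is a hypothetical GeoDataFrame of hotspot points in EPSG:4326.
import numpy as np
from sklearn.cluster import DBSCAN

projected = detections.to_crs(epsg=3035)                          # metric coordinates (assumed CRS)
coords = np.column_stack([projected.geometry.x, projected.geometry.y])

labels = DBSCAN(eps=5_000, min_samples=20).fit_predict(coords)    # mirrors the 5 km / 20-detection thresholds
detections["space_cluster"] = labels                              # -1 marks unclustered (noise) detections
```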
153-159: Missing references for these methodologies. (Also, a diagram would have been helpful.)
163: Whenever the word heuristically is used, there should be a justification of the heuristics being applied and ideally an estimate of the uncertainty involved.
Figure 2: The labels a) and b) are not clearly applied. The yellow points appear in b) only. There are no yellow polygons. The VIIRS fire detections at such high resolution should not be visualized as points as they are at least 375 x 375 m in extent.
175/176: There is no description of the final step of the algorithm, that is, the selection of one final vector. Is it the one of maximum length, some sort of average, a Gaussian model? What drove the choice of method, and what is the variability of the outcome? It seems to me that each ROS value should come with an uncertainty. As fire can grow in complex ways between successive detections, there is a need to report on what was found; and given that the dataset comprises only 254 fires, case studies should be presented that show typical cases beyond the single example in Fig. 2.
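To make the question concrete: one common construction is to take, for each vertex of the later isochrone, its distance to the earlier isochrone divided by delta-t, and then summarize these candidates with either the maximum or the median. Which summary was used should be stated explicitly. The sketch below (Shapely assumed, perimeters in a metric CRS) is only an illustration of that construction, not the authors' code.

```python
# Sketch: candidate spread rates between two consecutive isochrones (illustration only).
# `earlier` and `later` are Polygon perimeters in a metric CRS; `dt_hours` is the time between them.
import numpy as np
from shapely.geometry import Point, Polygon

def candidate_ros_kmh(earlier: Polygon, later: Polygon, dt_hours: float) -> np.ndarray:
    distances_m = np.array(
        [earlier.exterior.distance(Point(c)) for c in later.exterior.coords]
    )
    return distances_m / 1000.0 / dt_hours

# The summary choice then becomes explicit, e.g.
#   ros_max    = candidate_ros_kmh(p0, p1, dt).max()
#   ros_median = float(np.median(candidate_ros_kmh(p0, p1, dt)))
```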
179: These are not false detections. They are true detections of thermal anomalies that are not of interest to the study.
193-195: This sentence is unclear to me.
203-208: Section 3.1 should be expanded as a lot of questions remain open. These 254 fire events led to a substantially higher number of "fire spread timesteps"; from Fig. 5 my guess is that their number was 758 or thereabouts. Did each of the 254 events contain at least one spread timestep? (If yes, that would be almost surprising: was there anything in the clustering methodology to ensure that each fire had at least two successive acquisitions?) How were the fire spread events distributed? My guess is that a small number of long-running fires dominate the fire spread events. A histogram would be helpful. Also, I miss a discussion of latitude effects on the likelihood of repeat fire detections (because of satellite orbital properties). The entire discussion in 3.2 and 3.3 is tainted if these biases aren't transparently described first.
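A per-fire count of spread timesteps would answer this directly; a minimal sketch, reusing the hypothetical `isochrones` DataFrame from above (one row per perimeter, column `fire_id`):

```python
# Sketch: number of spread timesteps contributed by each fire event.
# Reuses the hypothetical `isochrones` DataFrame (one row per perimeter, column `fire_id`).
import matplotlib.pyplot as plt

timesteps_per_fire = isochrones.groupby("fire_id").size() - 1     # n perimeters -> n-1 spread steps
print(timesteps_per_fire.describe())
timesteps_per_fire.hist(bins=20)
plt.xlabel("spread timesteps per fire event")
plt.show()
```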
Figure 3: The caption, and the preceding text, should make clear that this burnt area is not the same as that detected in remotely sensed burnt area products, or delineated in the GIS systems maintained by fire managers.
Figure 4: Some fires are not detected here, in contrast to the datasets made available by fire management agencies (e.g. https://www.ble.de/DE/BZL/Daten-Berichte/Wald/wald_node.html). This is understandable but needs to be discussed.
251 ff (3.4 and Fig. 5): The values of n vary wildly between the classes. So maybe classes could be grouped to generate similarly sized datasets. What do the error bars represent?
269: The authors should agree on one choice of spelling of burnt/burned area.
270 ff: The authors discuss some sources of biases (fire size), but should also expand on the short length of their dataset (only 9 years of fires) and how it can skew the results regarding fire activity timing.
286 ff: Not all fire management areas apply the same prescribed burn processes, and permitting is also not homogeneous. This paragraph should be shortened and moved to an earlier location in the manuscript, as it addresses a very minor point. There are some formatting issues with parentheses.
300/301: What sensor limitations?
341/345: The VIIRS 375 m product is unlikely to detect any smoldering peat fires, so this is not surprising at all. You correctly state that what you're seeing is surface fires. The spread of the peat fire is entirely invisible to your methodology.
347: Formatting issue.
366ff: How are the low-to-moderate values you're getting impacted by averaging over a diurnal activity variation? Actual instantaneous spread may have been much faster.
401 ff: Please remove redundancy in the Conclusions section with what already has been said.
To conclude, in my view a quick, accurate and well-understood method to calculate VIIRS-based ROS for fire events would constitute a valuable and welcome contribution to the scientific record and toolset at the disposal of the fire management community. But the authors need to be careful to clearly describe the statistical limitations of their approach when it comes to statements about the NW-European fire regimes, and expand the presentation of the methodology itself.