Pre-disaster mapping with drones: an urban case study in Victoria, BC, Canada

We report a case study using drone-based imagery to develop a pre-disaster 3D map of downtown Victoria, British Columbia, Canada. This represents the first drone mapping mission over an urban area approved by Canada's aviation authority. The goal was to assess the quality of the pre-disaster 3D data in the context of geospatial accuracy and building representation. The images were acquired with a senseFly eBee Plus fixed-wing drone with real-time kinematic/post-processed kinematic functionality. Results indicate that the spatial accuracies achieved with this drone would allow for sub-meter building collapse detection, but the non-gimbaled camera was insufficient for capturing building facades.

observing elevation changes over buildings. Following the 2016 Kumamoto earthquake in Japan, Moya et al. (2018) detected collapsed buildings using pre- and post-earthquake light detection and ranging (LiDAR) digital surface models (DSMs). For each building, they calculated the average height difference between the DSMs and manually set a threshold value to detect collapse; this technique had a Cohen's kappa coefficient and overall accuracy of 0.80 and 93 %, respectively (Moya et al., 2018). Pre-event LiDAR data, however, can often be outdated, leading to false detections, or unavailable, especially in less-developed parts of the world. Post-event LiDAR data may be difficult to rapidly obtain. To address these operational challenges, drones are an alternative platform for acquiring 2.5D and 3D data and, when stored locally for emergency mapping, can be used to rapidly acquire data. Drone-derived aerial imagery, when paired with structure-from-motion (SfM) image processing software, can be used to generate sub-decimeter resolution orthomosaics, DSMs, and photorealistic 3D models in the form of colorized point clouds and textured meshes. Drone-based mapping can also potentially support longer-term needs assessments and reconstruction monitoring by surveying building damage levels. The traditional 2D approach with satellite imagery only provides information about building roofs and nearby debris, and previous research has shown that oblique perspectives of building facades are valuable for discerning between lower grades of building damage (Kakooei and Baleghi, 2017; Masi et al., 2017). Previous studies have conducted drone-based 3D mapping of buildings following a disaster. The motivation is to complement ground-based building damage assessments: cataloging the exterior damage in 3D can support the planning/prioritizing of subsequent, more thorough ground-based assessments (Vetrivel et al., 2018), and the planning/monitoring of reconstruction.
Previous studies (e.g., Fernandez Galarreta et al., 2015; Cusicanqui et al., 2018) have reported that damage features such as deformations, cracks, debris, inclined walls, and partially collapsed roofs are identifiable in drone-based 3D point clouds and mesh models.
These findings demonstrate that drone 3D data are capable of supporting post-disaster activities. However, previous studies have been limited to drone-based 3D mapping of: (i) a single building (Achille et al., 2015; Meyer et al., 2015), (ii) small, historic villages (Vetrivel et al., 2015; Dominici et al., 2017; Calantropio et al., 2018; Cusicanqui et al., 2018; Vetrivel et al., 2018), or (iii) modern cities, but without focus on the quality of building representation in the 3D data (Cusicanqui et al., 2018; Vetrivel et al., 2018). It is important to understand how drone-based 3D data would reconstruct a cityscape, particularly with a grid-based survey to capture multiple city blocks in a single flight. This flight pattern would balance areal coverage with 3D reconstruction quality. The dense spacing of buildings and the presence of high-rises in an urban scene create considerable potential for camera occlusion and may result in 3D mesh defects such as inaccurate shapes, holes, and blurred textures (Wu et al., 2018).
In addition to issues with photogrammetry, it is challenging to collect drone data over dense, urban areas due to airspace regulations that were designed to protect public safety. As such, in a disaster context, drone data over cities have generally been collected in the post-disaster phases, when destruction is widespread and these data are in high demand. With the historic emphasis on data collection in the post-disaster phases, it is important not to detract from pre-disaster mapping. Pre-disaster mapping not only provides baseline data from which to assess changes, but is also a crucial exercise that enables emergency management actors to establish operational protocols to maximize the effectiveness of drones in emergencies.

Nat. Hazards Earth Syst. Sci. Discuss., https://doi.org/10.5194/nhess-2018-318. Manuscript under review for journal Nat. Hazards Earth Syst. Sci. Discussion started: 26 November 2018. © Author(s) 2018. CC BY 4.0 License.
These protocols pertain to drone hardware/software, data collection, data processing, and data analysis.
We present a case study of pre-disaster mapping with a drone in Victoria, British Columbia, Canada. Victoria has at least a 30 % probability of experiencing a significantly damaging earthquake in the next 50 years (AIR Worldwide, 2013). A 2016 report on the seismic vulnerability of Victoria conducted a risk assessment for all buildings (13,330 buildings) in Victoria under various earthquake scenarios and levels of ground shaking (VC Structural Dynamics Ltd, 2016). The report concluded that 30 % of the buildings (3,936 buildings) have a high seismic risk, meaning they have at least a 5 % probability of complete damage in a 50-year period (VC Structural Dynamics Ltd, 2016). This pre-disaster mapping exercise was undertaken for the City of Victoria's Emergency Management Division and in partnership with GlobalMedic, a Canadian disaster relief charity. This was the first Transport Canada-approved drone flight over a major Canadian city. We were restricted by regulations to use a specific platform, a 1.1 kg senseFly eBee fixed-wing drone. The overarching goal of this case study was to assess the quality of the drone data that we were able to obtain in a manner adhering to federal regulations.

Objectives
The first objective was to assess the geospatial accuracy of the drone data. Geospatial accuracy is important for change detection applications, as it relates to the quality of registration between pre- and post-disaster datasets. This was done by first assessing the vertical accuracy of the drone DSM using 339 airborne LiDAR checkpoints. Then, a LiDAR DSM was subtracted from the drone DSM to visually assess the horizontal alignment of rooftops as a qualitative measure of horizontal accuracy.
The second objective was to assess the quality of 3D building representation. The only legally approved drone for this flight presents challenges for 3D mapping of cities, as it is a fixed-wing drone with a non-gimbaled camera. Research has shown that high camera tilt angles, which are not achievable with the regulatory platform for this flight, result in higher reconstruction density (fewer data gaps) and higher precision of points on building facades than lower camera tilt angles (Rupnik et al., 2015). The quality assessment of 3D building representation was done by visually assessing the drone 3D textured mesh, using Google 3D (i.e., the "3D Buildings" layer in Google Earth) as a reference for building appearance. Additionally, we applied a method previously used on post-disaster, drone-derived 3D point clouds to quantify data gaps on sample building facades.

Regulatory background
Transport Canada is the aviation authority that regulates drone operations in Canadian airspace. The current regulations require case-by-case permission for drone flights in urban areas. Permission is sought by submitting an application for a Special Flight Operations Certificate, in which the applicant must demonstrate sufficient ground/flight training, standard operating procedures, emergency procedures, drone maintenance procedures, and more. Additionally, coordination with air traffic control (Nav Canada) was required to perform the flight, as downtown Victoria is within controlled airspace, with nearby airports, heliports, and seaplane bases causing high-density air traffic. The only approved drone for this flight was a senseFly eBee, of which the "Plus" model was used for its higher georeferencing accuracy. The senseFly eBee Plus is a 1.1 kg, 1.1 m wingspan, fixed-wing drone made of lightweight expanded polypropylene foam, carbon fiber, and composite materials. The two eBee models are the lightest on Transport Canada's list of compliant drones, which includes drones meeting federal safety and quality standards. For this flight, the senseFly eBee drone was approved by Transport Canada due to its light weight and ability to glide to a landing.

Flight area
The drone flight covered a 1 km² area of downtown Victoria, BC, Canada. The western half and eastern half of the flight area covered parts of the Historic Commercial District (HCD) and Central Business District (CBD), respectively, resulting in image capture over a diversity of building types and heights. The HCD contains an undulating streetscape with low- to mid-rise, brick- and stone-facade buildings alternating between one and five stories, including boutique hotels, heritage buildings, businesses, and offices (CoV, 2011). The CBD contains high-density, mid- to high-rise commercial and residential buildings (CoV, 2011). The building heights within the flight area ranged from 2 to 55 m, and street widths varied between 7 and 24 m.

Drone hardware and flight planning
A senseFly eBee Plus drone with real-time kinematic (RTK)/post-processed kinematic (PPK) functionality and a senseFly SODA RGB 20-megapixel camera was used to collect imagery. The RTK/PPK image georeferencing capabilities of the drone replaced the need for ground control points (GCPs), which are not practical to distribute and survey in an emergency mapping context. The PPK mode was used, with correction data obtained from the NRCan Canadian Active Control System (Albert Head reference station, 10 km from the flight area). senseFly eMotion (v3) software (eMotion) was used to plan the flight. The flight was grid-based, composed of orthogonal flight lines running non-parallel with streets (i.e., approximately 45° offset).
The addition of perpendicular flight lines and the orientation of the grid were used to increase image coverage of building facades. The imagery frontal and lateral overlap were set to 75 %, and the flight altitude was 120 m above ground level.
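As a rough sanity check, the coverage geometry implied by these flight parameters can be estimated from simple pinhole-camera relations. The sensor and lens values below are nominal specifications assumed for a senseFly S.O.D.A.-class camera, for illustration only; they are not figures reported in this study:

```python
# Rough coverage-geometry check for the flight plan described above
# (120 m AGL, 75 % frontal/lateral overlap). Camera values are assumed
# nominal specs, used only for illustration.
altitude_m = 120.0                      # flight altitude above ground level
overlap = 0.75                          # frontal and lateral overlap
focal_mm = 10.6                         # assumed focal length
sensor_w_mm, sensor_h_mm = 13.2, 8.8    # assumed 1-inch sensor dimensions
img_w_px = 5472                         # assumed image width in pixels

scale = altitude_m / (focal_mm / 1000.0)         # ground metres per sensor metre
footprint_w_m = scale * (sensor_w_mm / 1000.0)   # image footprint, across track
footprint_h_m = scale * (sensor_h_mm / 1000.0)   # image footprint, along track
gsd_m = footprint_w_m / img_w_px                 # ground sample distance

line_spacing_m = footprint_w_m * (1 - overlap)   # spacing between flight lines
trigger_dist_m = footprint_h_m * (1 - overlap)   # spacing between exposures

print(f"GSD ~ {gsd_m * 100:.1f} cm/px")
print(f"footprint ~ {footprint_w_m:.0f} m x {footprint_h_m:.0f} m")
print(f"line spacing ~ {line_spacing_m:.0f} m, trigger distance ~ {trigger_dist_m:.0f} m")
```

Under these assumptions, the plan implies a ground sample distance of roughly 3 cm and flight lines spaced a few tens of meters apart, consistent with the sub-decimeter products discussed above.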

Drone image acquisition
The flight was conducted on June 14, 2018. The operations took place in the morning for increased safety, i.e., low air traffic.
However, the ideal flight time would be solar noon to minimize shadows from buildings. The ground control station was set up on a parkade rooftop within the flight area. The parkade, surrounded by relatively low buildings and an open courtyard, allowed for unobstructed takeoff/landing, visual line of sight, and radio signal between the drone and ground control station.
A total of 828 oblique images were captured, with pitch angles averaging 7° off nadir.

Image processing
The images were processed using a high-performance computer (Intel® Core™ i9-7900X CPU @ 3.30 GHz with 64 GB RAM and an NVIDIA GeForce GTX 1080 GPU). First, eMotion was used for PPK processing by incorporating raw GNSS observations from the reference station and drone to refine the image geotags. The geotagged images were processed using Pix4Dmapper Pro (v4.3.27) (Pix4D), a structure-from-motion multiview-stereo (SfM-MVS) software. SfM-MVS generally consists of the following steps. First, computer vision algorithms search through each image to identify "features", that is, pixel sets that are robust to changes in scale, illumination, and 3D viewing angle (Westoby et al., 2012). Next, the features are assigned unique "descriptors", which allow for the same features to be identified across multiple images, and for the images to be approximately aligned (Westoby et al., 2012). This initial image alignment is iteratively optimized via bundle adjustment algorithms, the output of which is a sparse 3D point cloud of feature correspondences (Westoby et al., 2012). Multiview-stereo algorithms then densify the sparse point cloud, typically by two or more orders of magnitude (Westoby et al., 2012). The dense point cloud is then used to generate a 3D textured mesh, which is a triangulated surface that is textured using the original images.
The dense point cloud is also used to generate a DSM. The DSM and images are used to generate an orthomosaic.
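The descriptor-matching step described above can be illustrated with a minimal sketch: each feature's descriptor vector is compared across two images, and a correspondence is kept only if its nearest neighbor is much closer than the second-nearest (Lowe's ratio test). The descriptors below are synthetic stand-ins, not real SIFT features, and this is not Pix4D's implementation:

```python
import numpy as np

# Minimal sketch of descriptor matching in SfM. "Image A" descriptors are
# random vectors; "image B" contains noisy copies of them plus clutter.
rng = np.random.default_rng(0)
desc_a = rng.normal(size=(20, 128))                           # "image A" descriptors
desc_b = desc_a + rng.normal(scale=0.05, size=desc_a.shape)   # same features, noisy
desc_b = np.vstack([desc_b, rng.normal(size=(10, 128))])      # plus unmatched clutter

def match_descriptors(a, b, ratio=0.8):
    """Return (index_in_a, index_in_b) pairs that pass Lowe's ratio test."""
    matches = []
    for i, d in enumerate(a):
        dists = np.linalg.norm(b - d, axis=1)     # distance to every candidate
        nearest, second = np.argsort(dists)[:2]   # nearest and second-nearest
        if dists[nearest] < ratio * dists[second]:
            matches.append((i, nearest))
    return matches

pairs = match_descriptors(desc_a, desc_b)
print(len(pairs), "matches")   # all 20 true correspondences are recovered
```

These putative matches are what the subsequent bundle adjustment refines into a sparse 3D point cloud.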
For the first objective of assessing the geospatial accuracy of the drone data, five DSMs were generated. Each DSM had increasingly computationally intensive parameters, resulting in an increasingly higher processing time. These various combinations were used to assess the differences in vertical accuracy achieved with "rapid" and "slow" processing, ranging in total processing time from 0.50 to 8.14 h. This comparison has important implications for the applicability of a drone-based DSM for rapid building collapse detection, where time is a major factor. Four "rapid" DSMs were generated in Pix4D using values of 1/8, 1/4, 1/2, and 1 for the image scale parameters (Step 1: keypoints image scale and Step 2: image scale), and low density for the point cloud. One "slow" DSM was generated using a value of 1 for the image scale parameters, and optimal (medium) density for the point cloud. All 5 DSMs were generated using 3 minimum matches, noise filtering, "sharp" surface smoothing, and inverse distance weighting interpolation. For the second objective of assessing 3D building representation, the 3D textured mesh was generated in Pix4D using a value of 1 for the image scale parameters, optimal (medium) density for the point cloud, 3 minimum matches, and high resolution for the textured mesh. A medium-resolution mesh was also generated for comparison to the high-resolution mesh.

Geospatial accuracy assessment
To be useful for change detection, such as DSM differencing for building collapse detection (e.g., Moya et al., 2018), the drone data must be geospatially accurate. Otherwise, misregistration of the drone data with pre- or post-event data may cause false detections. Therefore, the geospatial accuracy of each drone DSM was assessed using a 2013 LiDAR point cloud as a reference dataset. The vertical accuracy assessment was conducted using recommendations from the 2015 American Society for Photogrammetry and Remote Sensing (ASPRS) Positional Accuracy Standards for Digital Geospatial Data (ASPRS, 2015).
ASPRS (2015) notes that kinematic checkpoints (surveyed from a moving platform) can be used as supplemental reference data, but static checkpoints should be used for the main accuracy assessment. Due to the unavailability of ground survey data, the accuracy assessment used LiDAR data as the reference. The LiDAR data were acquired in 2013, had an average point spacing of 0.31 m, and had the same spatial reference as the drone data. The vertical error of each drone DSM was calculated using LiDAR checkpoints located on rooftops only, since the motivation was to assess the usability of the drone DSM for building collapse detection. To extract checkpoints from the LiDAR point cloud, first, 5000 points were randomly subsampled using CloudCompare (v2.9.1). From those 5000 points, only points corresponding to rooftops were retained. To avoid selecting checkpoints on rooftops that were not present during the 2013 LiDAR data collection, a 2013 satellite image was viewed in Google Earth and compared to the drone orthomosaic to determine buildings common to both datasets. ASPRS (2015) recommends vertical checkpoints be located on flat or uniformly sloped (≤ 10 % slope), open terrain, away from vertical artifacts and abrupt elevation changes. Therefore, the final selection of LiDAR checkpoints included only those on flat rooftops and away from edges and roof objects. A total of 339 LiDAR checkpoints were retained. Each LiDAR point z-coordinate (z_LiDAR) was subtracted from the corresponding drone DSM value (z_drone) to calculate errors (z_drone − z_LiDAR). A Shapiro-Wilk test (α level of 0.05) and a visual inspection of the histogram, normal Q-Q plot, and box plot indicated that the errors followed a normal distribution. Therefore, vertical accuracy was calculated as the vertical root mean squared error (RMSEz) following Eq. (1):

RMSE_z = \sqrt{\frac{1}{n} \sum_{i=1}^{n} \left( z_{drone}(i) - z_{LiDAR}(i) \right)^{2}}    (1)

where z_drone(i) is the value of the ith cell from the drone DSM, z_LiDAR(i) is the z-coordinate of the corresponding LiDAR point, and the total number of observations is represented by n (ASPRS, 2015). To visually assess the horizontal accuracy of the drone data, a DSM of difference (DoD) was generated. First, the LiDAR point cloud was interpolated into a 0.31 m DSM in ESRI ArcMap (v10.5.1) using inverse distance weighting interpolation and linear void fill. The LiDAR DSM was then subtracted from the "slow" drone DSM to calculate a 0.31 m DoD (DoD = DSM_drone − DSM_LiDAR). The DoD was used to visually assess the horizontal alignment of roofs as a qualitative measure of horizontal accuracy.
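As an illustration of this workflow, Eq. (1) and the DoD differencing reduce to a few array operations. The arrays below are synthetic stand-ins for the checkpoint and DSM data (the bias and noise magnitudes merely mimic those reported here), not the study's actual values:

```python
import numpy as np

# Sketch of the Eq. (1) vertical-accuracy check and the DoD differencing,
# using synthetic stand-ins for the 339 rooftop checkpoints and DSM rasters.
rng = np.random.default_rng(1)
z_lidar = rng.uniform(5, 50, size=339)                 # checkpoint elevations (m)
z_drone = z_lidar + rng.normal(0.06, 0.04, size=339)   # DSM samples with small bias

errors = z_drone - z_lidar                             # z_drone - z_LiDAR
rmse_z = np.sqrt(np.mean(errors ** 2))                 # Eq. (1)
print(f"mean = {errors.mean():.2f} m, std = {errors.std(ddof=1):.2f} m, "
      f"RMSEz = {rmse_z:.2f} m")

# DSM of difference for the horizontal-alignment check
dsm_drone = rng.uniform(0, 55, size=(100, 100))        # placeholder raster
dsm_lidar = dsm_drone - 0.06                           # shifted copy for illustration
dod = dsm_drone - dsm_lidar                            # DoD = DSM_drone - DSM_LiDAR
```

In practice the checkpoints and rasters would come from the CloudCompare and ArcMap steps described above; the arithmetic is unchanged.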

Assessment of building geometry and texture
The medium- and high-resolution textured meshes were visually assessed for quality of building representation in terms of geometry and texture. Eight sample buildings ranging in geometrical complexity were segmented from each mesh using CloudCompare and were visually compared. Google 3D (i.e., the Google Earth layer "3D Buildings") served as a reference for building appearance. The Google 3D layer was photogrammetrically derived using nadir and 45° aerial imagery that was collected with a multi-camera system in 2014. To support the visual assessment, each sample building was segmented from the dense point cloud, and each building point cloud was colored by 3D point density using CloudCompare. To further investigate geometrical and textural distortions within the mesh, the dense point cloud was used to quantify data gaps on building facades (i.e., regions of facades without points). The procedure generally followed Cusicanqui et al. (2018), who assessed the completeness of drone-based point clouds of post-earthquake study areas in Taiwan and Italy. Using CloudCompare, six sample facades were segmented from the dense point cloud. The Rasterize tool was used to project the points of each segmented facade onto a 0.50 m grid, with the projection plane parallel to the facade. Then, a 0.50 m raster was generated, showing the number of 3D points in each cell. For each raster, the percentage of facade data gaps was calculated by dividing the number of empty cells by the total number of cells. To support the data gap assessment, the sample facades were also segmented from the high-resolution mesh using CloudCompare.
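The facade data-gap metric can be sketched as follows. Synthetic facade points stand in for the segmented CloudCompare output, with part of the facade deliberately left empty to mimic an occluded region; the grid size matches the 0.50 m cells used here:

```python
import numpy as np

# Sketch of the facade data-gap metric: bin facade points into a 0.50 m
# grid on a plane parallel to the facade and report the share of empty
# cells. A 20 m x 10 m synthetic facade is used, with its upper-left
# quarter left empty to mimic an occluded region.
cell = 0.50
rng = np.random.default_rng(2)
pts = rng.uniform([0.0, 0.0], [20.0, 10.0], size=(2000, 2))   # (u, v) on facade
pts = pts[~((pts[:, 0] < 10.0) & (pts[:, 1] > 5.0))]          # carve out a gap

u_edges = np.arange(0.0, 20.0 + cell, cell)
v_edges = np.arange(0.0, 10.0 + cell, cell)
density, _, _ = np.histogram2d(pts[:, 0], pts[:, 1], bins=[u_edges, v_edges])

gap_pct = 100.0 * np.sum(density == 0) / density.size
print(f"facade data gaps: {gap_pct:.0f} % of cells empty")
```

The `density` array plays the role of the CloudCompare point-density raster, and empty cells correspond to the red cells in Fig. 3.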

Geospatial accuracy of drone DSM
The vertical error of each drone DSM was calculated using 339 randomly selected LiDAR checkpoints located on flat roofs and away from edges and roof objects. For the "slow" DSM, errors ranged from −0.09 to 0.20 m (Fig. 1a). The mean vertical error was 0.06 m, with a standard deviation of 0.04 m (Fig. 1a), demonstrating that the drone DSM tended to overestimate the elevation of rooftops. Table 1 shows the RMSEz values of the "slow" DSM and the 4 "rapid" DSMs. RMSEz decreased as total processing time increased (Table 1). The DSM generated in the least amount of time, 0.50 h, had an RMSEz of 0.16 m, which is 0.09 m higher than the RMSEz for the "slow" DSM, generated in 8.14 h (Table 1). The horizontal accuracy of the "slow" DSM was visually assessed by calculating a DSM of difference (DoD = DSM_drone − DSM_LiDAR) (Fig. 1b). The DoD shows blue tints for elevation overestimations and red tints for elevation underestimations by the drone DSM (Fig. 1b). Figure 1b identifies 16 buildings with large regions of contiguous DSM differences. These contiguous DSM differences are due to changes that occurred between the 2013 LiDAR and 2018 drone data acquisitions, such as new construction, structure removal, and parking lot excavation (Fig. 1b). For the rest of the DoD, the red and blue cells mostly correspond to changes in vegetation, and to inconsistencies in building footprint edges between the drone and LiDAR DSMs (Fig. 1b). Building outlines appear mostly blue and do not appear weighted more heavily in one direction (Fig. 1b) of the drone DSM relative to the LiDAR DSM. With a 0.31 m average point spacing, it is possible the LiDAR point cloud did not sample roof edges, resulting in slightly smaller building footprints in the LiDAR DSM than the drone DSM.
Building footprint edge differences could also be due to inaccurate geometry from drone-based photogrammetry.

Building representation: mesh resolution and data gap assessment
The appearance of buildings varied considerably between the medium- and high-resolution 3D meshes. Figure 2 shows eight sample buildings represented by the dense point cloud (colored by 3D point density), both meshes, and Google 3D as a reference. Both meshes were generated using the settings described in § 2.4, with only the mesh resolution setting varying. For each building, the point density is higher on roofs than facades, and data gaps (i.e., regions of zero points) are visible within facades (Fig. 2). The medium-resolution mesh has visibly poorer reconstruction of building geometry and, subsequently, more deformations in texture than the high-resolution mesh (Fig. 2). This was expected, as each medium-resolution building contains only 4-5 % of the vertices/faces of its high-resolution counterpart. Figures 2a-2d show heritage buildings with complex geometry: Victoria City Hall (Fig. 2a), St. John the Divine Anglican Church (Fig. 2b), Alix Goolden Performance Hall (Fig. 2c), and St. Andrew's Cathedral (Fig. 2d). Smaller architectural features common to these heritage buildings, such as gabled entrances, dormer windows, conical roofs, spires, and towers, are better resolved in the high-resolution mesh (Fig. 2a-d). For these buildings, as well as buildings with simpler geometry (Fig. 2e-h), the high-resolution mesh shows higher linearity of facade, roof, and window edges. For the high-rise buildings (Fig. 2e-h), facades with widespread data gaps in the point cloud appear to protrude inward and outward in the meshes, and have severe textural distortions. For generally planar facades with regular sampling (e.g., the front-facing facades in Fig. 2e and 2f), the apparent geometrical and textural differences between the medium- and high-resolution meshes are less prominent.
Despite its 95-96 % lower density of vertices/faces, the medium-resolution mesh appears more robust to geometrical/textural distortions for buildings with simpler, planar geometry than for those with complex geometry, provided there is adequate sampling. However, as demonstrated by the high-rise buildings (Fig. 2e-h), facades with widespread data gaps have severe distortions, regardless of mesh resolution.
Due to considerable improvements in building geometry and texture, the high-resolution textured mesh was assessed going forward. As demonstrated by the point-density point clouds in Fig. 2, roofs were more densely and regularly sampled than facades, and some facades contained widespread gaps that resulted in severe distortions in the meshes. To further assess facade data gaps, particularly partial data gaps, six facades were segmented from the dense point cloud and high-resolution mesh. The 0.50 m point density raster and high-resolution mesh segmentation are shown for each facade in Fig. 3. Data gaps, represented by red cells, encompass 9-59 % of the facades (Fig. 3). For each facade, large regions of contiguous red cells in the point density raster appear to correspond to distortions in the mesh (i.e., stretched texture and inwardly protruding geometry).

Key lessons: drone geospatial accuracy and up-to-date, pre-disaster DSMs
For building collapse detection (e.g., Moya et al., 2018), drones can provide post-event DSMs that can be differenced with LiDAR or photogrammetrically derived pre-event DSMs. However, there are geospatial accuracy requirements to avoid artificial detections caused by the misregistration of pre- and post-event DSMs. As such, we conducted a vertical accuracy assessment of each drone DSM using LiDAR checkpoints located on flat roofs only, as the goal was to assess the usability of the drone DSM for building collapse detection. Based on 339 checkpoints, the RMSEz of the "slow" drone DSM, generated in 8.14 h, was 0.07 m, and the RMSEz of the most "rapid" drone DSM, generated in 0.50 h, was 0.16 m (Table 1). To assess the implications of the vertical accuracies, a level of detection (LoD) can be calculated to determine the threshold elevation difference that can be detected using pre- and post-disaster DSMs with known RMSEz values, following Eq. (2):

LoD = 3 \sqrt{RMSE_{z1}^{2} + RMSE_{z2}^{2}}    (2)

where RMSE_z1 is the RMSEz of the pre-disaster DSM, RMSE_z2 is the RMSEz of the post-disaster DSM, and the multiplier, 3, represents the extreme tails of a normal probability distribution (Hugenholtz et al., 2013). Table 2 shows hypothetical DoDs, each generated with a different combination of pre- and post-disaster DSMs, and their resulting LoDs from Eq. (2). In Table 2, the RMSEz values for the "slow" and "rapid" drone DSMs were experimentally derived in this study. The RMSEz for piloted LiDAR was experimentally derived by García-Quijano et al. (2008) (Table 2). The RMSEz for a non-RTK/PPK drone was experimentally derived by Hugenholtz et al. (2016) (Table 2). Based on 180 RTK-GNSS vertical checkpoints from a gravel pit, Hugenholtz et al. (2016) calculated an RMSEz of 2.144 m for a non-RTK/PPK senseFly eBee (no GCPs), and an RMSEz of 0.089 m for an RTK/PPK-enabled senseFly eBee (no GCPs) (similar to our RMSEz of 0.07 m).
For each hypothetical DoD in Table 2, the corresponding LoD value indicates that any elevation difference between -LoD and +LoD is likely due to error and cannot be interpreted as real.
DoDs generated with one or more DSMs derived from a non-RTK/PPK drone (DoD5 and DoD6) had LoDs of 6.44 m and 9.10 m (Table 2). For LoDs attributed to the use of non-RTK/PPK drones, buildings shorter than the LoDs cannot be assessed for collapse, and for assessable buildings, only DoD values exceeding the LoDs are likely to correspond to real collapse. The 6.44 m and 9.10 m LoDs exceed the typical height of a single building story, suggesting that DoDs generated with non-RTK/PPK drones cannot be reliably used to detect partial collapse. Conversely, DoDs generated with one or more DSMs from an RTK/PPK drone (DoD1-4) had LoDs of 0.30 m and 0.52 m, suggesting that these DoDs can be reliably used to detect partial collapse (Table 2). This includes DoD3 and DoD4, both generated using a "rapid" post-event drone DSM (Table 2). The use of the lowest image scale value (1/8) and lowest point density in Pix4D to generate the most rapid DSM in 0.50 h (Table 1) retained a sub-meter LoD (0.52 m) (Table 2). These results demonstrate that RTK/PPK-enabled drones are required for reliable building collapse detection, and that rapid processing settings can be used. Furthermore, the DoD showed buildings with large regions of contiguous DSM differences due to changes that occurred between the 2013 LiDAR and 2018 drone data acquisitions, such as new construction, structure removal, and parking lot excavation (Fig. 1b). This demonstrates that, for building collapse detection, it is necessary to maintain an up-to-date, pre-disaster DSM to avoid false detections. Victoria has an ever-changing downtown core, with rezoning and new developments to help accommodate significant population growth forecasted over the next 20-30 years (CoV, 2011). With constant new construction throughout growing cities like Victoria, it is important for municipalities to regularly update DSMs, such that changes in construction are not masked as disaster-induced destruction.
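The LoD values discussed above can be reproduced directly from Eq. (2) using the RMSEz figures reported in the text:

```python
import math

# Eq. (2): LoD = 3 * sqrt(RMSEz1^2 + RMSEz2^2), applied to the RMSEz
# values reported in the text (0.07 m "slow", 0.16 m "rapid", and 2.144 m
# for a non-RTK/PPK eBee from Hugenholtz et al., 2016).
def lod(rmse_pre, rmse_post, multiplier=3.0):
    return multiplier * math.sqrt(rmse_pre ** 2 + rmse_post ** 2)

rmse_slow, rmse_rapid, rmse_non_rtk = 0.07, 0.16, 2.144

print(f"slow pre + slow post:        LoD = {lod(rmse_slow, rmse_slow):.2f} m")     # 0.30 m
print(f"slow pre + rapid post:       LoD = {lod(rmse_slow, rmse_rapid):.2f} m")    # 0.52 m
print(f"non-RTK/PPK + slow:          LoD = {lod(rmse_non_rtk, rmse_slow):.2f} m")  # 6.44 m
print(f"non-RTK/PPK + non-RTK/PPK:   LoD = {lod(rmse_non_rtk, rmse_non_rtk):.2f} m")  # 9.10 m
```

Note how the LoD is dominated by the less accurate DSM in each pair: the 2.144 m non-RTK/PPK error swamps the drone DSM's 0.07 m contribution entirely.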

Key lessons: drone mesh resolution and imaging platform
From an image processing standpoint, it was shown in § 3.2 that mesh geometry and texture improve considerably from a medium-resolution to a high-resolution mesh (Fig. 2). The high-resolution mesh required more processing time, including subsetting the project into two, but these improvements justify the added time for virtual 3D damage assessment applications.
For image collection, the deformed geometry and texture of buildings (Fig. 2 and 3) could be improved by collecting highly oblique images of facades in addition to nadir images of roofs and ground features. The average 7° camera pitch angle used in this case study was likely insufficient for capturing vertical and near-vertical faces, resulting in large point cloud data gaps and geometrical/textural distortions in the 3D mesh (Fig. 2 and 3). Using a higher camera angle (e.g., 30-45° off nadir) could make important improvements, since deformations in the 3D model could otherwise be mistaken for building damage. Rupnik et al. (2015) found that increasing the camera tilt angle resulted in a higher point density on building facades and higher 3D precision of points, and that the addition of oblique images to a nadir image set increased the vertical accuracy of points. This suggests that different hardware is required for 3D mapping of municipalities with small drones. Options include multi-rotor drones with gimbaled cameras that are capable of highly oblique image capture. However, this challenges current regulations that only allow lightweight, fixed-wing drones to be flown over municipalities. To comply with current regulations, a potential solution is to use a lightweight fixed-wing drone with a camera that tilts for oblique image capture. One commercially available option is the senseFly eBee X drone with a senseFly SODA 3D RGB camera, which captures one nadir image and two laterally oblique images per waypoint. Lightweight RTK/PPK-enabled multi-rotors may be more affordable than the senseFly eBee X with SODA 3D camera, but achieve a fraction of the flight time and areal coverage of fixed-wing drones. It is important to note that a higher camera angle is not a panacea: higher camera tilt angles result in more occlusion from surrounding buildings, which contributes to lower point density on the lower parts of facades (Rupnik et al., 2015).
Moreover, point cloud gaps will persist on facades due to several factors: (i) occlusions caused by surrounding buildings, facade protrusions, and other objects, (ii) insufficient texture, (iii) highly reflective surfaces like glass, and (iv) poor image quality (Fonstad et al., 2013; Alsadik et al., 2014). Another potential solution is to obtain images of building facades from the ground. Wu et al. (2018) showed that drone-derived textured meshes of urban study areas in Germany and Hong Kong were improved with the integration of ground-based images. The meshes had increased geometric accuracy and improved texture (Wu et al., 2018). However, potential challenges to obtaining terrestrial images include added time, safety concerns, and limited access.
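The occlusion trade-off noted above can be illustrated with a simple street-canyon sketch. This is a hypothetical geometric model, not a result from Rupnik et al. (2015): a neighboring building blocks the lower part of a target facade, and the blocked height grows as the camera tilts further off nadir.

```python
import math

# Hypothetical street-canyon model: a neighbouring building of height h_n,
# standing a horizontal distance w in front of the target facade along the
# view direction, blocks the lower part of that facade. For a viewing ray
# pitched t degrees off nadir, the depression angle from horizontal is
# (90 - t), so the occluded height is max(0, h_n - w * tan(90 - t)).
def occluded_height(tilt_off_nadir_deg, neighbour_h_m, gap_m):
    depression = math.radians(90.0 - tilt_off_nadir_deg)
    return max(0.0, neighbour_h_m - gap_m * math.tan(depression))

for tilt in (7, 30, 45):
    h = occluded_height(tilt, neighbour_h_m=20.0, gap_m=10.0)
    print(f"{tilt:2d} deg off nadir -> lower {h:.1f} m of facade occluded")
```

Under these illustrative numbers (a 20 m neighbor, 10 m in front of the facade), the near-nadir 7° view is unobstructed while a 45° view loses the lower 10 m of the facade, consistent with the trade-off reported by Rupnik et al. (2015).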

Conclusions
We presented a case study of drone-based pre-disaster mapping in downtown Victoria, BC, Canada. The objectives were to assess the quality of the data in terms of geospatial accuracy and 3D building representation. Using 339 airborne LiDAR checkpoints located on flat roofs, the RMSEz of the drone DSM was 0.07 m. The DSM of difference (DoD = DSM_drone − DSM_LiDAR) showed complete roof overlap, suggesting adequate horizontal accuracy for change detection applications. For building collapse detection, we conclude that drones with RTK/PPK image georeferencing capabilities and up-to-date, pre-disaster DSMs are required to avoid false detections. Furthermore, image processing using "rapid" settings, as opposed to "slow" settings, reduced processing time from 8.14 h to 0.50 h, increased the DSM RMSEz from 0.07 m to 0.16 m, and increased the DoD LoD from 0.30 m to 0.52 m. Though processing times were specific to our computing hardware, these differences demonstrate that "rapid" processing is capable of quickly generating DSMs that can reliably detect sub-meter building collapse. Conversely, theoretical DoDs derived from one or more non-RTK/PPK drone DSMs have LoDs too high (i.e., > 6 m) to reliably detect partial building collapse. These results suggest that RTK/PPK-enabled drones and "rapid" image processing are most suitable for rapid building collapse detection with drones.
For virtual building damage assessment with drone-derived 3D textured meshes, it was shown that a high-resolution mesh visually improved building geometry and texture relative to a medium-resolution mesh containing only 4-5 % of its vertices/faces, especially for heritage buildings with complex geometries and small architectural features. However, neither mesh resolution was able to cope with large point cloud gaps on building facades. These data gaps were shown to correspond with severely distorted geometry and texture in the mesh. Therefore, for future drone-based pre- and post-disaster 3D mapping of municipalities, different hardware would be required. The ability to capture highly oblique images is paramount for virtually reconstructing building facades. Options include a multi-rotor drone with a gimbaled camera. However, follow-up studies with lightweight multi-rotor drones will not be possible without modification to existing airspace regulations. Therefore, we suggest a follow-up study with a senseFly eBee X with SODA 3D camera.

Figure 1. Geospatial accuracy results for the "slow" DSM: (a) vertical error histogram with statistics and Shapiro-Wilk (S-W) p-value, and (b) DSM of difference, calculated by subtracting DSM_LiDAR from DSM_drone. Blue tints represent elevation overestimations and red tints represent elevation underestimations by DSM_drone. Buildings with major contiguous DSM differences are boxed in black. These contiguous DSM differences are due to changes during the 5 years between LiDAR (2013) and drone (2018) data acquisition, including new construction (1, 2, 4-10, 12, 14-16), structure removal (3, 5, 11), and parking lot excavation (13).

Figure 2. Sample buildings segmented from the dense point cloud (colored by 3D point density), medium-resolution mesh, and high-resolution mesh.
Both meshes were generated using identical input imagery and processing settings, except for the mesh resolution setting. Google 3D is shown as a reference for building appearance.

Figure 3. Sample building facades, each represented by a 0.50 m 3D point density raster and a high-resolution mesh segmentation. Red cells within each raster represent data gaps (0 points per cell).