Drone mapping sensors comparison

The choice between LiDAR and photogrammetry for drone-based mapping and inspection is one of the most frequently debated topics in commercial UAV operations. Both technologies produce three-dimensional spatial datasets — point clouds, digital elevation models, and derived mapping products — but they do so through fundamentally different physical principles, with different strengths, limitations, and cost profiles. Selecting the right approach for a given application requires understanding not just the headline accuracy specifications but the practical performance differences that emerge in field conditions.

Photogrammetry derives 3D geometry by finding corresponding points across multiple overlapping 2D images taken from different vantage points and computing the 3D coordinates of those points through triangulation. The technique is theoretically simple but computationally intensive, and its accuracy depends on image overlap geometry, camera calibration quality, ground control point placement, and the textural richness of the surfaces being mapped. LiDAR generates geometry directly by measuring the time-of-flight of laser pulses reflected from surfaces — a direct physical measurement that does not depend on image texture and operates in conditions that would degrade photogrammetric performance.

How Photogrammetry Works in Practice

Structure from Motion (SfM) photogrammetry, the dominant computational approach for drone-based 3D reconstruction, processes overlapping image sets — typically captured with 70 to 85% forward overlap and 60 to 70% lateral overlap — through a pipeline that first identifies and matches feature points across images, then solves for camera positions and orientations that are geometrically consistent with the observed feature locations, and finally triangulates point positions from the solved camera geometry. The resulting sparse point cloud is densified through Multi-View Stereo (MVS) processing, which matches pixel-level image regions across multiple views to generate a dense 3D model of the surveyed surface.
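The triangulation step at the core of this pipeline can be illustrated with a minimal two-view example. The camera matrices and point below are toy values rather than output from any real SfM solver; the linear (DLT) method shown is one standard way to triangulate a matched feature once camera poses are known.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one matched feature from two views.
    P1, P2: 3x4 camera projection matrices; x1, x2: 2D pixel observations."""
    # Each observation contributes two linear constraints on the
    # homogeneous 3D point X (derived from x cross PX = 0).
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # The least-squares solution of A X = 0 is the last right singular vector.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]  # dehomogenize

def project(P, X):
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

# Toy geometry: two identical cameras with a 10 m baseline along X.
K = np.array([[1000.0, 0.0, 500.0],
              [0.0, 1000.0, 500.0],
              [0.0, 0.0, 1.0]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), [[-10.0], [0.0], [0.0]]])

X_true = np.array([3.0, 2.0, 50.0])  # a ground point 50 m from the cameras
X_est = triangulate(P1, P2, project(P1, X_true), project(P2, X_true))
```

With noise-free projections the point is recovered exactly; in practice SfM solves this jointly for millions of features and refines everything in a bundle adjustment.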

Ground control points (GCPs) — markers placed at known survey coordinates on the ground before the flight — are the primary mechanism for transforming the photogrammetric model from its internally consistent but arbitrary coordinate system to a georeferenced coordinate system with real-world accuracy. The number, distribution, and quality of GCPs are the primary determinants of photogrammetric absolute accuracy. With well-distributed GCPs measured at centimeter level using RTK or static GPS survey, photogrammetric surveys can achieve absolute horizontal accuracy in the 1 to 3 centimeter range at altitudes of 60 to 120 meters AGL.
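The georeferencing step amounts to fitting a similarity transform (scale, rotation, translation) between the GCP markers' model coordinates and their surveyed coordinates. A minimal sketch using Umeyama's closed-form method; the point values and function name are illustrative:

```python
import numpy as np

def fit_similarity(src, dst):
    """Closed-form fit of s, R, t such that dst ~ s * R @ src + t (Umeyama).
    src: (N,3) GCP marker positions as reconstructed in the model frame;
    dst: (N,3) surveyed real-world coordinates of the same markers."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    A, B = src - mu_s, dst - mu_d
    cov = B.T @ A / len(src)
    U, S, Vt = np.linalg.svd(cov)
    # Guard against reflections: force det(R) = +1.
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(U) * np.linalg.det(Vt))])
    R = U @ D @ Vt
    s = np.trace(np.diag(S) @ D) / A.var(axis=0).sum()
    t = mu_d - s * R @ mu_s
    return s, R, t

# Synthetic check: recover a known transform from five matched points.
rng = np.random.default_rng(42)
src = rng.uniform(-50, 50, (5, 3))
theta = np.radians(30)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
s_true, t_true = 2.0, np.array([100.0, 200.0, 50.0])
dst = s_true * src @ R_true.T + t_true

s, R, t = fit_similarity(src, dst)
```

Real GCP registration is usually folded into the bundle adjustment rather than applied as a rigid post-transform, but the underlying geometry is the same.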

RTK and PPK (post-processing kinematic) drone platforms have reduced the dependency on manually deployed GCPs by recording precise camera position at the moment of each image capture. RTK/PPK photogrammetry can achieve horizontal accuracy of 2 to 5 centimeters without any ground control in optimal conditions, which eliminates most of the labor cost of GCP deployment for surveys where this accuracy level is sufficient. Vertical accuracy remains more sensitive to camera position quality and typically requires at least a small number of GCPs for centimeter-level vertical performance.

How LiDAR Works in Practice

Airborne LiDAR systems emit laser pulses — typically at rates of 50,000 to 1,500,000 pulses per second depending on sensor class — and record the precise time at which each pulse is emitted and at which its reflection returns from the target surface. The measured round-trip time, multiplied by the speed of light and halved, gives the range from the sensor to the target; the known orientation of the laser beam gives the direction. Combining each range and direction with precise sensor position and attitude from the onboard IMU and GPS yields a three-dimensional point with coordinates in a real-world reference frame.
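The range-to-point computation can be sketched in a few lines. This is a deliberately simplified direct-georeferencing model: real systems also apply lever-arm offsets, boresight calibration, and atmospheric corrections.

```python
import numpy as np

C = 299_792_458.0  # speed of light, m/s

def lidar_point(tof_s, beam_dir_sensor, R_sensor_to_world, sensor_pos_world):
    """Georeference a single LiDAR return (simplified).
    tof_s: round-trip time of flight in seconds.
    beam_dir_sensor: unit vector of the laser beam in the sensor frame.
    R_sensor_to_world: 3x3 rotation from the IMU-derived attitude.
    sensor_pos_world: sensor position in world coordinates (from GPS)."""
    rng = C * tof_s / 2.0  # halve the round trip to get one-way range
    return sensor_pos_world + R_sensor_to_world @ (rng * beam_dir_sensor)

# A nadir-pointing pulse from 100 m AGL: round trip of about 667 ns.
pos = np.array([0.0, 0.0, 100.0])
down = np.array([0.0, 0.0, -1.0])
pt = lidar_point(2 * 100.0 / C, down, np.eye(3), pos)
# pt lands at the origin: the ground point directly below the sensor
```

The 667-nanosecond round trip illustrates why timing precision dominates LiDAR accuracy: one nanosecond of timing error corresponds to roughly 15 cm of range error.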

The direct distance measurement principle gives LiDAR several properties that photogrammetry cannot match. First, LiDAR can record multiple returns from a single pulse — an important capability for penetrating vegetation canopy and recording both the vegetation surface and the underlying terrain. Second, LiDAR is active — it generates its own illumination — and is therefore unaffected by ambient lighting conditions or surface texture. Third, the point density from modern LiDAR sensors (50 to 300 points per square meter at typical UAV survey altitudes) is more uniform and predictable than that of photogrammetric dense clouds, whose density varies with image overlap, surface texture, and processing parameters.

The primary limitations of LiDAR are cost and weight. Survey-grade drone LiDAR systems capable of centimeter-level accuracy typically weigh 500 grams to 2 kilograms and cost $15,000 to $60,000 for the sensor alone, compared to photogrammetry camera systems in the $500 to $8,000 range. For applications where photogrammetry can achieve the required accuracy, the cost difference is difficult to justify. LiDAR's advantages become compelling where photogrammetry's limitations are operationally significant.

Accuracy and Point Density Comparison

| Metric | Photogrammetry (RTK/GCP) | LiDAR (survey-grade) |
|---|---|---|
| Horizontal accuracy (typical) | 1–3 cm (with GCPs) | 1–3 cm |
| Vertical accuracy (typical) | 1.5–4 cm (with GCPs) | 1–2 cm |
| Point density (100 m AGL) | 100–400 pts/m² | 50–300 pts/m² |
| Vegetation penetration | Limited — surface only | Good — multiple returns |
| Sensor cost (indicative) | $500–$8,000 | $15,000–$60,000 |
| Processing time (1 km²) | 2–8 hours (cloud) | 30–90 minutes |
| Low-light capability | Limited | Full |

Application-Specific Guidance

Open-area topographic surveys — construction sites, quarries, stockpile measurement, agricultural field mapping — are typically photogrammetry applications. The terrain is visible, well-lit, and textured; GCP deployment is straightforward; and the cost savings of photogrammetry over LiDAR are substantial at the survey volumes these applications require. RTK photogrammetry in particular has largely displaced ground-based total station surveys for construction earthwork measurement, delivering comparable accuracy at dramatically lower cost per hectare.

Forestry and corridor surveys under canopy cover are LiDAR applications. Photogrammetry can map the forest canopy surface accurately but cannot penetrate it to measure ground elevation or characterize understory vegetation structure. LiDAR's multiple-return capability allows simultaneous measurement of canopy top, multiple canopy layers, and ground surface in a single pass. Terrain models derived from LiDAR under dense forest canopy are essential for flood modeling, timber inventory, and pipeline corridor clearance verification where accurate ground elevation beneath vegetation is required.

Infrastructure inspection — bridges, power lines, communication towers, and similar structures — benefits from LiDAR's ability to generate precise dimensional measurements in three dimensions regardless of lighting conditions or surface texture. Painted metal surfaces, which are common in infrastructure, are challenging for photogrammetry because paint creates a relatively featureless texture that reduces feature-matching quality. LiDAR generates geometry from physical range measurement rather than image texture matching, and is unaffected by surface color or texture.

Urban 3D modeling presents a case where the hybrid approach is often optimal. Building facades, roads, and hardscape map well with photogrammetry, which captures color and texture information that LiDAR alone cannot provide. Vegetated areas within urban scenes, and locations where overhanging structures shadow the ground, benefit from LiDAR's canopy penetration and active illumination. Urban mapping programs increasingly fuse photogrammetric color texture with LiDAR geometric accuracy, using the strengths of each modality to compensate for the limitations of the other.
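One fusion step, projecting LiDAR points into a photogrammetrically posed image to sample color, can be sketched as follows. The pose and intrinsics here are toy values; real pipelines also handle occlusion and lens distortion.

```python
import numpy as np

def colorize(points_world, K, R_wc, t_wc, image):
    """Sample an RGB color for each LiDAR point from a registered image.
    K: camera intrinsics; R_wc, t_wc: world-to-camera pose, assumed to
    come from the photogrammetric bundle adjustment."""
    cam = points_world @ R_wc.T + t_wc  # world frame -> camera frame
    uv = cam @ K.T
    uv = uv[:, :2] / uv[:, 2:3]         # perspective divide to pixels
    u = np.clip(np.rint(uv[:, 0]).astype(int), 0, image.shape[1] - 1)
    v = np.clip(np.rint(uv[:, 1]).astype(int), 0, image.shape[0] - 1)
    return image[v, u]

# Toy check: a point on the optical axis maps to the principal point.
K = np.array([[100.0, 0.0, 50.0],
              [0.0, 100.0, 50.0],
              [0.0, 0.0, 1.0]])
image = np.zeros((100, 100, 3), dtype=np.uint8)
image[50, 50] = (200, 120, 40)
colors = colorize(np.array([[0.0, 0.0, 1.0]]), K, np.eye(3), np.zeros(3), image)
```

The same projection also runs in reverse: LiDAR geometry can constrain depth for photogrammetric surfaces in texture-poor regions.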

Processing Infrastructure Requirements

The computational infrastructure required to process photogrammetric and LiDAR datasets differs significantly in character. Photogrammetric processing is computationally intensive — particularly the MVS densification step — but can be distributed across cloud computing resources, and cloud-based photogrammetry services have made survey-scale processing accessible without dedicated local hardware. A typical 500-image photogrammetry dataset covering 20 to 30 hectares at 3 cm GSD requires 2 to 6 hours of processing on a modern cloud instance.
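The scale of these datasets follows directly from GSD arithmetic. A back-of-envelope sketch; the camera parameters below are illustrative, loosely matching a 20 MP 1-inch sensor rather than any specific product, and the count ignores turn margins and cross strips:

```python
def gsd_cm(altitude_m, focal_mm, pixel_um):
    """Ground sample distance in cm per pixel."""
    return (pixel_um * 1e-6 * altitude_m / (focal_mm * 1e-3)) * 100

def photo_count(area_ha, altitude_m, focal_mm, pixel_um,
                img_w=5472, img_h=3648, fwd=0.80, lat=0.65):
    """Rough photo count to cover an area at the given overlaps."""
    g = gsd_cm(altitude_m, focal_mm, pixel_um) / 100  # metres per pixel
    footprint_w = g * img_w   # across-track ground coverage, m
    footprint_h = g * img_h   # along-track ground coverage, m
    # Net new ground covered per photo after overlap:
    spacing_fwd = footprint_h * (1 - fwd)
    spacing_lat = footprint_w * (1 - lat)
    return (area_ha * 10_000) / (spacing_fwd * spacing_lat)

g = gsd_cm(100, 8.8, 2.41)        # roughly 2.7 cm/px at 100 m AGL
n = photo_count(25, 100, 8.8, 2.41)
print(f"GSD: {g:.1f} cm/px, photos for 25 ha: {n:.0f}")
```

Halving the altitude halves the GSD but quadruples the photo count, which is why processing budgets are so sensitive to accuracy requirements.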

LiDAR processing is less computationally demanding per unit area and can often be completed on a standard laptop for survey-scale datasets. The primary processing steps — trajectory computation (combining IMU and GPS data to determine precise sensor position throughout the flight), point cloud generation (computing 3D coordinates for each return), and ground/vegetation classification — are well-established and largely automated in commercial LiDAR processing software. The main processing challenge is trajectory quality: LiDAR point cloud accuracy is directly dependent on IMU calibration quality and GPS signal continuity during the survey flight.
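The ground/vegetation classification step can be illustrated with a drastically simplified minimum-elevation grid filter. Production software uses progressive TIN densification or morphological filters that handle slopes and outliers; this sketch only conveys the idea of separating ground returns from canopy by local elevation.

```python
import numpy as np

def grid_min_ground(points, cell=1.0, tol=0.15):
    """Label points within `tol` metres of their grid cell's minimum
    elevation as ground. A toy stand-in for real ground filters."""
    cells = np.floor(points[:, :2] / cell).astype(int)
    buckets = {}
    for i, key in enumerate(map(tuple, cells)):
        buckets.setdefault(key, []).append(i)
    ground = np.zeros(len(points), dtype=bool)
    for idx in buckets.values():
        idx = np.asarray(idx)
        zmin = points[idx, 2].min()
        ground[idx] = points[idx, 2] <= zmin + tol
    return ground

# Synthetic scene: flat ground at z = 0 with a canopy return at z = 12 m
# directly above every ground return (as from a multi-return sensor).
xs, ys = np.meshgrid(np.arange(10.0), np.arange(10.0))
xy = np.column_stack([xs.ravel(), ys.ravel()])
pts = np.vstack([
    np.column_stack([xy, np.zeros(100)]),       # ground returns
    np.column_stack([xy, np.full(100, 12.0)]),  # canopy returns
])
labels = grid_min_ground(pts)
# labels[:100] are all True (ground); labels[100:] all False (canopy)
```

On real terrain the tolerance and cell size trade off between rejecting low vegetation and preserving genuine relief, which is why production classifiers adapt to local slope.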

Conclusion

The LiDAR vs photogrammetry decision framework is less about which technology is better in an absolute sense and more about which technology's characteristics best match the demands of the specific application. Cost-sensitive, open-area survey applications with good imaging conditions will generally favor photogrammetry. Applications involving vegetation canopy, metallic or low-texture surfaces, challenging lighting, or the need for precise dimensional measurements of complex structures will favor LiDAR. Many of the most demanding programs use both.

As drone LiDAR sensor costs continue to decrease — following a trajectory similar to the cost reduction that made photogrammetry cameras accessible to mid-market operators over the past decade — the threshold at which LiDAR's capability advantages justify its incremental cost will continue to fall. Programs that build their mapping infrastructure now with modular, multi-sensor payload architectures will be positioned to adopt improved sensor technology as it becomes available without rebuilding their operational workflows.