AI Habitat Mapping

This is a 3D digital reconstruction created through aerial photogrammetry of stereo images from our drone, used to precisely determine vegetation height. The 3D rendering is then overlaid with the original three-band 2D optical imagery to classify and map 21 habitat types, especially ecologically important riparian vegetation.

This past summer, YERC interns Grace Campbell and Auggie Tupper from Montana State University continued to develop an AI method to identify and classify specific habitat types and to monitor their change over time. Using the latest publicly available light detection and ranging (LiDAR) data from USGS and satellite imagery from NASA, they gathered ground-based data to train machine-learning models. Key to this effort is LiDAR, which precisely measures vegetation height. This 3D structure, combined with high-resolution optical imagery from the Landsat 8/9 and Sentinel-2 satellite sensors, captures composition: trees versus shrubs versus grasses versus bare ground. Creating these models and updating them every year would allow long-term monitoring of impacts, both from climate and from land-use activities, to inform private landowners’ and agencies’ decision-making. In surveys of biologists and managers, the top need they identified was free or low-cost access to accurate annual maps of habitat types. These do not currently exist because of the massive effort required to create them, and because increasing annual human and natural disturbance quickly renders habitat maps obsolete.
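To make the pairing of structure and composition concrete, here is a minimal sketch, not YERC's actual pipeline, of how a LiDAR-derived canopy-height layer and multispectral bands can be stacked into per-pixel features for a machine-learning model. The tile size, band count, and random values are purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
H, W = 64, 64  # a small illustrative image tile

# Two co-registered layers for the same tile:
canopy_height = rng.uniform(0.0, 25.0, size=(H, W))   # vegetation height in meters, from LiDAR
spectral = rng.uniform(0.0, 1.0, size=(H, W, 4))      # e.g. red, green, blue, near-infrared reflectance

# Stack structure (height) and composition (spectra) into one feature cube,
# then flatten to a (pixels, features) table a classifier can consume.
features = np.dstack([canopy_height[..., None], spectral])  # shape (64, 64, 5)
X = features.reshape(-1, features.shape[-1])                # shape (4096, 5)
```

The key idea is simply that each pixel carries both a height value and its spectral signature, so a model can separate, say, tall riparian cottonwoods from shrubs with similar color.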

LiDAR is essentially laser radar: it estimates height variations above the ground by using pulses of light to build a 3D model. A device on an aircraft emits the pulses, which reflect off (or otherwise interact with) the ground below before being sensed by a scanner mounted on the same aircraft. The time each pulse takes to return to the scanner gives the height of that ‘return’, and the aircraft’s GPS assigns it a precise location; the full set of returns is then mapped to create a 3D model of the scanned area. Publicly available LiDAR data covering Park County, Montana and Yellowstone National Park (together referred to as the northern Yellowstone area), combined with satellite imagery of the same locations, were used to create the ArcGIS dataset for training the model. Polygons within a given area were assigned a habitat value based on the imagery and LiDAR data associated with each polygon, and those assignments were then confirmed against past YERC vegetation datasets as well as observations in the field.
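The time-of-flight idea above reduces to simple arithmetic: light travels out and back, so the distance to the reflecting surface is the speed of light times the round-trip time, divided by two. This small sketch (with an invented aircraft altitude and pulse time, and assuming a straight-down pulse) shows the calculation:

```python
# Speed of light in meters per second.
C = 299_792_458.0

def return_range_m(round_trip_time_s: float) -> float:
    """Distance from scanner to the reflecting surface.

    The pulse covers the distance twice (out and back), hence the division by 2.
    """
    return C * round_trip_time_s / 2.0

def surface_elevation_m(aircraft_altitude_m: float, round_trip_time_s: float) -> float:
    """Elevation of the reflecting surface, assuming a nadir (straight-down) pulse."""
    return aircraft_altitude_m - return_range_m(round_trip_time_s)

# A pulse that returns after about 6.67 microseconds traveled roughly 2 km
# round trip, so the reflecting surface is roughly 1000 m below the scanner.
elev = surface_elevation_m(3000.0, 6.671e-6)
```

In a real system, returns from treetops arrive slightly sooner than returns from the ground beside them, and that difference is what yields vegetation height.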

This ArcGIS dataset was used to train an AI model in TensorFlow, an open-source machine-learning toolkit, creating an automated workflow that can classify habitats and monitor changes to them from new 3D LiDAR and 2D optical satellite imagery. Feeding the model new information about a given area each year yields an automated, efficient, and accurate classification of that area’s habitats, and change can be measured precisely by direct comparison to the previous year’s map. This would eliminate the need for expensive year-over-year field monitoring within the ecosystem, freeing resources for the organizations responsible for monitoring and maintaining these habitats to use elsewhere. YERC’s pioneering use of drones to measure vegetation height, much as LiDAR does, could also contribute in future years, tracking changes in height as well as composition.
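As a hedged illustration of the kind of TensorFlow model such a workflow might use (not YERC's actual architecture), the sketch below trains a small dense network that maps per-pixel features, a LiDAR height value plus spectral bands, to one of the 21 habitat classes. The layer sizes and the synthetic training data are assumptions for demonstration only.

```python
import numpy as np
import tensorflow as tf

NUM_FEATURES = 5   # e.g. canopy height plus four spectral bands (illustrative)
NUM_CLASSES = 21   # the 21 habitat types mapped in the project

# A small fully connected classifier over per-pixel features.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(NUM_FEATURES,)),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Tiny synthetic stand-in for pixels sampled from labeled training polygons.
X = np.random.rand(256, NUM_FEATURES).astype("float32")
y = np.random.randint(0, NUM_CLASSES, size=256)
model.fit(X, y, epochs=1, batch_size=32, verbose=0)

# Each prediction is a probability distribution over the habitat classes.
probs = model.predict(X[:4], verbose=0)
```

Re-running prediction on a new year's imagery, then differencing the resulting class maps against the previous year's, is what makes the annual change detection automatic.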

Here in Montana, the relationship humans have with natural and working landscapes is crucial for landowners and for wildlife and their habitats. However, these landscapes are rapidly changing due to impacts from climate (droughts, floods, fires, severe storms, insect outbreaks) and human land-use activities driven by population growth and market economies. This project presents an efficient way to better understand these changing landscapes and take action. We are grateful to the Cinnabar Foundation for their support of our habitat mapping project.

YERC Staff