Abstract
Drone imagery is becoming a primary source of overhead information for decision support in many fields, especially when combined with deep learning. Datasets used to train object detection and semantic segmentation models for geospatial data analysis are called GeoAI datasets. They consist of images and corresponding labels, represented by full-size masks typically obtained by manual digitizing. GIS software provides a set of tools that can automate tasks over geo-referenced raster and vector layers. This work describes a workflow that uses GIS tools to produce GeoAI datasets. In particular, it covers the steps to obtain ground-truth data from OSM, methods for geometric and spectral augmentation, and the data fusion of drone imagery. A method semi-automatically produces masks for point and line objects by calculating an optimum buffer distance. Tessellation into chips, pairing, and imbalance checking are performed over the image-mask pairs. Dataset splitting into train-validation-test subsets is done randomly. All of the code for the different methods is provided in the paper, as well as point and road datasets produced as examples of point and line geometries, and the original drone orthomosaic images produced during the research. Semantic segmentation experiments on the point and line datasets using a classical U-Net show that the semi-automatically produced masks, called primitive masks, obtained a higher mIoU than other equal-size masks, and almost the same mIoU as full-size manual masks.