Abstract
The dynamic development of deep learning methods in recent years has prompted their widespread application in photogrammetry and remote sensing, especially in image recognition, classification, and object detection. One of the biggest challenges in this field, however, remains the low availability of training datasets, particularly for applications involving oblique aerial imagery and UAV data, since acquiring such databases is labor-intensive. A solution to the unavailability of datasets and the need for manual annotation is to automate the generation of image annotations, and one such approach is adopted in this work. The proposed methodology for semi-automating the creation of training datasets was applied to detect objects in nadir and oblique images acquired from UAVs. The methodology comprises the following steps: (1) the generation of a dense 3D point cloud by two different methods, UAV photogrammetry and terrestrial laser scanning (TLS); (2) data processing, including clipping to objects and filtering of the point clouds; (3) the projection of cloud points onto the aerial images; and (4) the generation of bounding boxes around the objects of interest. In addition, the experiments performed are designed to assess the accuracy and quality of the training datasets acquired in this way. The effect of the accuracy of the point cloud derived from dense UAV image matching on the resulting bounding boxes extracted by the proposed method was also evaluated.
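Steps (3) and (4) of the pipeline can be illustrated with a minimal sketch: projecting object points onto an image with a standard pinhole camera model and taking the axis-aligned bounding box of the projected pixels. The function names, camera parameters, and toy point cloud below are illustrative assumptions, not the paper's actual implementation or data.

```python
import numpy as np

def project_points(points_world, K, R, t):
    """Project 3D world points onto the image plane (pinhole model).

    points_world: (N, 3) array of object points (e.g., a clipped point cloud)
    K: (3, 3) camera intrinsic matrix
    R, t: extrinsics mapping world coordinates to camera coordinates
    Returns an (N, 2) array of pixel coordinates.
    """
    cam = R @ points_world.T + t.reshape(3, 1)  # (3, N) camera coordinates
    uv = K @ cam                                # homogeneous pixel coordinates
    return (uv[:2] / uv[2]).T                   # perspective division

def bounding_box(pixels, img_w, img_h):
    """Axis-aligned bounding box of projected points, clipped to the image."""
    x_min, y_min = pixels.min(axis=0)
    x_max, y_max = pixels.max(axis=0)
    return (max(0.0, x_min), max(0.0, y_min),
            min(float(img_w), x_max), min(float(img_h), y_max))

# Toy example: identity pose, simple intrinsics, three object points
K = np.array([[1000.0, 0.0, 640.0],
              [0.0, 1000.0, 480.0],
              [0.0, 0.0, 1.0]])
R = np.eye(3)
t = np.zeros(3)
points = np.array([[-0.5, -0.5, 10.0],
                   [0.5, 0.5, 10.0],
                   [0.0, 0.0, 12.0]])
px = project_points(points, K, R, t)
box = bounding_box(px, 1280, 960)  # (x_min, y_min, x_max, y_max)
```

In practice, the projected points come from the UAV photogrammetry or TLS point cloud after clipping and filtering, and the camera pose and intrinsics come from the bundle adjustment, so bounding-box quality depends directly on point cloud and orientation accuracy.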