Abstract
The aim of this research was to evaluate the performance of smartphone depth sensors (Time-of-Flight (ToF) camera and Light Detection and Ranging (LiDAR)) on Android (Huawei P30 Pro) and iOS (iPhone 12 Pro and iPad Pro 2021) devices for building 3D point clouds. In particular, the smartphones were tested in several case studies involving the scanning of different objects: 10 building material samples, a statue, an indoor room environment and the remains of a Doric column at a major archaeological site. The quality of the point clouds was evaluated through visual analysis and by means of three eigenvalue-based features (eigenfeatures): surface variation, planarity and omnivariance. Based on this approach, several issues affecting the point clouds generated by the smartphones were highlighted, such as surface splitting, loss of planarity and inertial navigation system drift. Moreover, in the absence of such scanning problems, the achievable accuracy of this type of scan is approximately 1–3 cm. This research therefore proposes a method for quantifying anomalies occurring in smartphone scans and, more generally, for verifying the quality of the point clouds obtained with these devices.
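For readers unfamiliar with eigenfeatures, the three quantities named above are commonly derived from the eigenvalues λ1 ≥ λ2 ≥ λ3 of the 3D covariance matrix of a local point neighborhood. The sketch below, in Python with NumPy, assumes the standard definitions found in the point-cloud feature literature (surface variation = λ3/(λ1+λ2+λ3), planarity = (λ2−λ3)/λ1, omnivariance = (λ1·λ2·λ3)^(1/3)); it is illustrative only and not the authors' exact implementation.

```python
import numpy as np

def eigenfeatures(neighborhood: np.ndarray) -> dict:
    """Eigenvalue-based features for a local point neighborhood (N x 3 array).

    Assumes the commonly used definitions based on the eigenvalues
    l1 >= l2 >= l3 of the local 3D covariance matrix.
    """
    # Covariance of the centered neighborhood
    centered = neighborhood - neighborhood.mean(axis=0)
    cov = centered.T @ centered / len(neighborhood)
    # eigvalsh returns eigenvalues of a symmetric matrix in ascending order
    l3, l2, l1 = np.linalg.eigvalsh(cov)
    l1, l2, l3 = (max(v, 1e-12) for v in (l1, l2, l3))  # guard against zeros
    return {
        "surface_variation": l3 / (l1 + l2 + l3),   # a.k.a. change of curvature
        "planarity": (l2 - l3) / l1,
        "omnivariance": (l1 * l2 * l3) ** (1.0 / 3.0),
    }

# Example: a noisy planar patch should yield high planarity
# and low surface variation.
rng = np.random.default_rng(0)
patch = np.c_[rng.uniform(-1, 1, (200, 2)), rng.normal(0, 0.01, 200)]
print(eigenfeatures(patch))
```

In practice these features are computed per point over a k-nearest-neighbor or fixed-radius neighborhood, so that drops in planarity or spikes in surface variation localize scan anomalies such as the surface splitting discussed in the paper.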