Aerospace, Vol. 10, Issue 12 (2023)
ARTICLE

A Vision-Based Pose Estimation of a Non-Cooperative Target Based on a Self-Supervised Transformer Network

Quan Sun    
Xuhui Pan    
Xiao Ling    
Bo Wang    
Qinghong Sheng    
Jun Li    
Zhijun Yan    
Ke Yu and Jiasong Wang    

Abstract

In the realm of non-cooperative space security and on-orbit servicing, a significant challenge is accurately determining the pose of abandoned satellites using imaging sensors. Traditional methods for estimating the position of the target suffer from stray-light interference in space, leading to inaccurate results. Conversely, deep learning techniques require a substantial amount of training data, which is especially difficult to obtain for on-orbit satellites. To address these issues, this paper introduces a binocular pose estimation model based on a Self-supervised Transformer Network (STN) that achieves precise pose estimation even under poor imaging conditions. The proposed method generates simulated training samples covering a variety of imaging conditions. Then, by combining convolutional neural network (CNN) and SIFT features for each sample, it minimizes the disruptive effects of stray light. Furthermore, the feedforward network in the Transformer is replaced with a global average pooling layer; integrating the CNN's inductive bias in this way compensates for the Transformer's limitations in scenarios with limited data. Comparative analysis against existing pose estimation methods highlights the superior robustness of the proposed method to noise in the sample sets. The effectiveness of the algorithm is demonstrated on simulated data, enhancing the current landscape of binocular pose estimation technology for non-cooperative targets in space.
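The abstract's key architectural change — swapping the Transformer's position-wise feedforward network for a global average pooling layer — can be illustrated with a minimal NumPy sketch. This is an assumption-laden reading, not the paper's implementation: it assumes single-head attention, a post-norm-free residual layout, and that "global average pooling" means pooling over the token dimension and feeding the pooled context back through a residual connection. All names and dimensions are illustrative.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(x, wq, wk, wv):
    # x: (tokens, dim); single-head scaled dot-product attention.
    q, k, v = x @ wq, x @ wk, x @ wv
    scores = softmax(q @ k.T / np.sqrt(x.shape[-1]))
    return scores @ v

def stn_block(x, wq, wk, wv):
    # Standard residual self-attention sub-layer.
    a = x + self_attention(x, wq, wk, wv)
    # Hypothetical replacement of the per-token feedforward network:
    # pool over the token axis to get one global context vector, then
    # broadcast it back to every token via a residual add.
    g = a.mean(axis=0, keepdims=True)  # (1, dim) global average pool
    return a + g                       # (tokens, dim)

rng = np.random.default_rng(0)
dim = 8
x = rng.standard_normal((16, dim))          # 16 tokens, 8-dim features
wq, wk, wv = (rng.standard_normal((dim, dim)) * 0.1 for _ in range(3))
y = stn_block(x, wq, wk, wv)
print(y.shape)  # (16, 8)
```

The design intuition, as the abstract frames it, is that pooling injects a convolution-like aggregation bias with no extra learned parameters, which matters when training data is scarce.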