Applied Sciences, Vol. 10, Issue 17 (2020)

Cooperative and Multimodal Capabilities Enhancement in the CERNTAURO Human–Robot Interface for Hazardous and Underwater Scenarios

Carlos Veiga Almagro, Giacomo Lunghi, Mario Di Castro, Diego Centelles Beltran, Raúl Marín Prades, Alessandro Masi and Pedro J. Sanz

Abstract

The use of remote robotic systems for inspection and maintenance in hazardous environments is a priority for all tasks that are potentially dangerous for humans. However, currently available robotic systems lack the level of usability that would allow inexperienced operators to accomplish complex tasks. Moreover, the task's complexity increases drastically when a single operator is required to control multiple remote agents (for example, when picking up and transporting large objects). In this paper, a system that allows an operator to prepare and configure cooperative behaviours for multiple remote agents is presented. The system is part of a human–robot interface that was designed at CERN, the European Organization for Nuclear Research, to perform remote interventions in its particle accelerator complex as part of the CERNTAURO project. The modalities of interaction with the remote robots are presented in detail. The multimodal user interface enables the user to activate assisted cooperative behaviours according to a mission plan. The multi-robot interface has been validated at CERN on a mockup of its Large Hadron Collider (LHC) using a team of two mobile robotic platforms, each equipped with a robotic manipulator. Moreover, strong similarities were identified between the CERNTAURO and TWINBOT projects, which aim to create usable robotic systems for underwater manipulation. Therefore, the cooperative behaviours were also validated in a multi-robot pipe-transport scenario in a simulated underwater environment, experimenting with more advanced vision techniques. The cooperative teleoperation can be coupled with additional assisted tools such as vision-based tracking and grasp determination for metallic objects, as well as communication protocol design. The results show that the cooperative behaviours enable a single user to carry out a robotic intervention with more than one robot in a safer way.
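To make the cooperative-transport idea concrete, the sketch below illustrates a generic leader-follower scheme in Python. It is not the CERNTAURO implementation and all names are illustrative: the assumption is that the operator teleoperates one manipulator (the leader), and the second manipulator's end-effector setpoint is derived so that the relative transform between the two grippers, recorded when both robots grasp the shared object (e.g. a pipe), remains rigid.

```python
# Minimal sketch (hypothetical, not the CERNTAURO code): leader-follower
# coordination for two-robot cooperative transport. The follower gripper
# setpoint preserves the rigid grasp constraint on the shared object while
# only the leader is teleoperated.
import numpy as np

def pose_to_matrix(position, rotation):
    """Build a 4x4 homogeneous transform from a position (3,) and a 3x3 rotation."""
    T = np.eye(4)
    T[:3, :3] = rotation
    T[:3, 3] = position
    return T

def follower_target(T_leader, T_grasp_offset):
    """Follower end-effector target that keeps the grasp constraint rigid.

    T_leader       -- current leader end-effector pose (world frame)
    T_grasp_offset -- fixed transform from leader gripper to follower gripper,
                      captured when both robots grasp the object
    """
    return T_leader @ T_grasp_offset

# The grasp offset is captured once, at the moment both robots hold the pipe...
T_leader_0 = pose_to_matrix(np.array([0.5, 0.0, 0.3]), np.eye(3))
T_follower_0 = pose_to_matrix(np.array([0.5, 0.6, 0.3]), np.eye(3))
T_offset = np.linalg.inv(T_leader_0) @ T_follower_0

# ...then every teleoperation update of the leader yields a follower setpoint.
T_leader_now = pose_to_matrix(np.array([0.55, 0.05, 0.32]), np.eye(3))
T_follower_cmd = follower_target(T_leader_now, T_offset)
print(T_follower_cmd[:3, 3])  # commanded follower gripper position
```

In such a scheme the cooperative behaviour reduces the operator's workload to a single teleoperation stream, which is the kind of single-user multi-robot operation the abstract describes; the actual CERNTAURO behaviours additionally account for mission planning and assisted perception.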
