Home  /  Information  /  Vol. 14 No. 10 (2023)  /  Article
ARTICLE
TITLE

CapGAN: Text-to-Image Synthesis Using Capsule GANs

Maryam Omar    
Hafeez Ur Rehman    
Omar Bin Samin    
Moutaz Alazab    
Gianfranco Politano and Alfredo Benso    

Abstract

Text-to-image synthesis is one of the most critical and challenging problems of generative modeling. It is of substantial importance in the area of automatic learning, especially for image creation, modification, analysis and optimization. A number of works have been proposed in the past to achieve this goal; however, current methods still lack scene understanding, especially when it comes to synthesizing coherent structures in complex scenes. In this work, we propose a model called CapGAN to synthesize images from a given single text statement and thereby resolve the problem of globally coherent structures in complex scenes. For this purpose, skip-thought vectors are used to encode the given text into a vector representation. This encoded vector is used as an input for image synthesis using an adversarial process, in which two models are trained simultaneously, namely a generator (G) and a discriminator (D). The model G generates fake images, while the model D tries to predict whether a sample comes from the training data or was generated by G. The conceptual novelty of this work lies in integrating capsules at the discriminator level to make the model understand the orientational and relative spatial relationships between the different entities of an object in an image. The inception score (IS) and the Fréchet inception distance (FID) are used as quantitative evaluation metrics for CapGAN. The IS recorded for images generated using CapGAN is 4.05 ± 0.050, which is around 34% higher than for images synthesized using traditional GANs, whereas the FID score calculated for images synthesized using CapGAN is 44.38, almost a 9% improvement over previous state-of-the-art models. The experimental results clearly demonstrate the effectiveness of the proposed CapGAN model, which is exceptionally proficient at generating images with complex scenes.
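The inception score used above can be illustrated concretely. The sketch below is not the paper's implementation: it computes IS = exp(E_x[KL(p(y|x) || p(y))]) from an array of per-image class probabilities, which in practice would come from a pretrained Inception-v3 classifier; toy distributions stand in here to keep the example self-contained.

```python
import numpy as np

def inception_score(probs, eps=1e-12):
    """Inception Score from per-image class probabilities p(y|x),
    shape (n_images, n_classes): exp of the mean KL divergence
    between each conditional p(y|x) and the marginal p(y)."""
    probs = np.asarray(probs, dtype=float)
    marginal = probs.mean(axis=0)  # p(y), averaged over images
    kl = np.sum(probs * (np.log(probs + eps) - np.log(marginal + eps)), axis=1)
    return float(np.exp(kl.mean()))

# Confident and diverse predictions -> score near the class count
confident = np.eye(4).repeat(25, axis=0)   # 100 images, 4 classes
# Uniform predictions -> score near 1
uniform = np.full((100, 4), 0.25)

print(inception_score(confident))  # close to 4
print(inception_score(uniform))    # close to 1
```

A higher score rewards images that are individually recognizable (peaked p(y|x)) yet collectively diverse (broad p(y)), which is why it is paired here with FID, a metric that also compares generated features against real-image statistics.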

 Similar articles

       
 
Muhammad Abi Berkah Nadi, Sayed Ahmad Fauzan     Pp. 1-9
Recovery efforts following a disaster can be slow and painstaking work, and potentially put responders in harm's way. A system which helps identify defects in critical building elements (e.g., concrete columns) before responders must enter a structure ca...

 
Ana García Serrano, Jorge Horcas Pulido, Fernando López Ostenero     Pp. 26-36
This paper presents a study of verbs in Spanish and their potential to retrieve images from the Wikipedia (Wikimedia). An information retrieval model based on the linguistic structures of verbs is designed and developed, along with an environment that all...

 
Houaria ABED, Lynda ZAOUI     Pp. 97-113
Recent years have witnessed great interest in developing methods for content-based image retrieval (CBIR). Generally, the image search results which are returned by an image search engine contain multiple topics, and organizing the results into different...

 
Héctor Andrés Melgar Sasieta, Fabiano Duarte Beppler, Roberto Carlos do Santos Pacheco (Author)     Pp. 381-389
This paper presents a model that aims to facilitate the visualization of the knowledge stored in digital repositories using visual archetypes. Archetypes are structures that contain visual representations of the real world that are known a priori by the ...

 
Yongbo Liu, Peng He, Yan Cao, Conghua Zhu and Shitao Ding    
A critical precondition for realizing mechanized transplantation in rice cultivation is the implementation of seedling tray techniques. To augment the efficacy of seeding, a precise evaluation of the quality of rice seedling cultivation in these trays is...
Journal: Applied Sciences