Abstract
We live in an era where time is a scarce resource and people rely on technological innovations to ensure prompt and smooth access to the information required for their daily activities. In this context, conversational agents play an increasingly prominent role by mediating the interaction between humans and computers in specific contexts. However, they remain laborious to build for cross-domain use cases or when they are expected to adapt automatically throughout user dialogues. This paper introduces a method to plug multiple domains of knowledge into a conversational agent localized in Romanian, in order to facilitate the extension of the agent's area of expertise. Furthermore, the agent is intended to become more domain-aware and to learn new information dynamically from user conversations by means of a knowledge graph acting as a network of facts and information. We ensure strong natural language understanding capabilities by proposing a novel architecture that combines RoBERT contextualized embeddings with syntactic features. Our approach improves intent classification performance (F1 score = 82.6) compared with a baseline pipeline relying only on features extracted from the agent's training data. Moreover, the proposed RDF knowledge representation is confirmed to provide flexibility in storing and retrieving natural language entities, values, and factoid relations between them in the context of each microworld.