Abstract
Conceptual models are an effective and unparalleled means of communicating complicated information to a broad variety of stakeholders in a short period of time. In practice, however, conceptual models vary considerably in clarity, employed features, communicated content, and overall quality. This can impair model comprehension to the point where models become practically useless. To counter this, guidelines for creating "good" conceptual models have been proposed. However, these guidelines are often abstract, hard to operationalize across different modeling languages, partly overlapping, or even contradictory. In addition, no comparative study of the proposed guidelines exists so far. This issue is exacerbated by the lack of established metrics to measure or estimate model comprehension for a given conceptual model. In this article, we present the results of a literature survey of 109 publications in the field and discuss metrics to measure model comprehension, their quantification, and their empirical substantiation. The results show that although several concrete, quantifiable metrics and guidelines have been proposed, concrete evaluative recommendations are largely missing. Moreover, some of the suggested guidelines contradict one another, and few metrics exist that allow common frameworks for model quality to be instantiated in a specific way.