Abstract
Crowdsourcing is one of the sources of spatial data, but owing to its unstructured form, the quality of noisy crowd judgments is a challenge. In this study, we address the problem of detecting and removing crowdsourced data bias as a prerequisite for better-quality open-data output. This study aims to find the most robust data quality assurance system (QAs). To achieve this goal, we design logic-based QAs variants and test them on an air quality crowdsourcing database. By extending the paradigm of urban air pollution monitoring from particulate matter concentration levels to air-quality-related health symptom load, the study also builds a new perspective for citizen science (CS) air quality monitoring. The method includes a geospatial web (GeoWeb) platform as well as a QAs based on conditional statements. A four-month crowdsourcing campaign resulted in 1823 outdoor reports, with a rejection rate of up to 28%, depending on the QAs variant applied. The focus of this study was not on the validation of digital sensors but on eliminating logically inconsistent surveys and technologically incorrect objects. As QAs effectiveness may depend on location and societal structure, this opens up cross-border opportunities for replicating the research in other geographical conditions.
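As an illustration only, the sketch below shows how a conditional-statement (rule-based) QAs check might reject logically inconsistent crowdsourced reports; the field names and rules are hypothetical examples and are not the rule set used in the study.

```python
# Minimal sketch of a rule-based (conditional-statement) QA filter for
# crowdsourced reports. Field names and rules are hypothetical examples,
# not the rules applied in the study.

def is_consistent(report: dict) -> bool:
    """Return True if a report passes the example consistency rules."""
    # Rule 1: a report claiming "no symptoms" must not also list symptoms.
    if report.get("no_symptoms") and report.get("symptoms"):
        return False
    # Rule 2: the report must carry valid WGS84 coordinates.
    lat, lon = report.get("lat"), report.get("lon")
    if lat is None or lon is None:
        return False
    if not (-90 <= lat <= 90 and -180 <= lon <= 180):
        return False
    return True

reports = [
    {"no_symptoms": True, "symptoms": ["cough"], "lat": 52.2, "lon": 21.0},
    {"no_symptoms": False, "symptoms": ["cough"], "lat": 52.2, "lon": 21.0},
]
accepted = [r for r in reports if is_consistent(r)]
print(f"accepted {len(accepted)} of {len(reports)} reports")
```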