Abstract
Using technology, or engaging with research or medical treatment, typically requires user consent: agreeing to terms of use for a technology or service; providing informed consent for research participation, clinical trials, or medical intervention; or consenting as one legal basis for processing personal data. Introducing AI technologies, where explainability and trustworthiness are focal concerns for both government guidelines and responsible technologists, imposes additional challenges. Understanding enough of the technology to make an informed decision, or consent, is essential but involves accepting uncertain outcomes. Further, the contribution of AI-enabled technologies, not least during the COVID-19 pandemic, raises ethical concerns about the governance of their development and deployment. Using three typical scenarios (contact tracing, big data analytics, and research during public emergencies), this paper explores a trust-based alternative to consent. Unlike existing consent-based mechanisms, this approach treats consent as a typical behavioural response to perceived contextual characteristics. Decisions to engage derive from the assumption that all relevant stakeholders, including research participants, will negotiate on an ongoing basis. Accepting dynamic negotiation between the main stakeholders, as proposed here, introduces a specifically socio-psychological perspective into the debate about human responses to artificial intelligence. This trust-based consent process leads to a set of recommendations for the ethical use of advanced technologies as well as for the ethical review of applied research projects.