"Citizens’ acceptance of artificial intelligence (AI) in public service delivery is important for its legitimate and effective use by government. Human involvement in AI systems has been suggested as a way to boost citizens’ acceptance of these systems and their perceptions of the systems’ fairness. However, there is little empirical evidence to assess these claims. To address this gap, we conducted a pre-registered conjoint experiment in the UK on the acceptance of AI in processing two kinds of public permits: immigration visas and parking permits. We hypothesise that greater human involvement boosts acceptance of AI in decision-making and associated perceptions of its fairness. We further hypothesise that greater human involvement mitigates the negative impact of certain AI features, such as inaccuracy, high cost, or data sharing. We find that more human involvement tends to increase acceptance, while perceptions of fairness are less strongly affected. Yet, when substantial human discretion was introduced in parking permit scenarios, respondents preferred more limited human input.
We found little evidence that human involvement moderates the impact of AI’s unfavourable attributes. System-level factors such as high accuracy, the presence of an appeals system, increased transparency, reduced cost, non-sharing of data, and the absence of private company involvement all boost both acceptance and perceived procedural fairness. We find limited evidence that individual characteristics affect these results. The findings show how the design of AI systems can increase their acceptability to citizens for use in public services."
"The findings of this study contribute to existing debates in three main ways:
1) how human versus AI involvement in public service provision shapes its acceptance by citizens - in most scenarios, respondents preferred processes with more human involvement, although these effects were relatively small compared to accuracy and cost considerations. Yet in specific contexts, such as local government parking permits based on demonstrable need, respondents showed a tendency to cap human involvement, favouring the algorithm. The nuances of public trust in different sectors of administration, from benefits allocation to parental support, may be key determinants here.
2) how technology can shape the relationship between citizens and states - results suggest resistance to the accumulation and sharing of citizens’ data—but we also show, in the context of other system-level characteristics, that accuracy seems to be more influential than data privacy.
3) the key mechanisms, both in terms of the experiences of individuals and features of AI, that underpin its acceptance - by empirically testing vital mechanisms and examining the intricate relationships between individual characteristics, including literacy about AI, and features of the AI systems, we have highlighted new variables in this domain. However, we only tested two mechanisms against a controlled set of AI choices, which might not capture the full range of possible reactions."
"Our study’s focus on the government’s role in granting permits limits the generalisability of its findings to other public service contexts. In cases such as education, social care, or police interventions, the balance between human and machine involvement might be different. Nevertheless, many routine interactions with government involve permit applications similar to those we examined, so the findings remain broadly relevant. We also note that our experimental setup allows us to produce findings using “complete” information about the AI systems in a tabulated format. In real-world scenarios, citizens may neither have access to such comprehensive information nor actively seek it out. In particular, we suggest that further studies should investigate not only citizens’ perceptions but also the effects of varying official communications about AI systems to citizens."