The drive for innovation, efficiency, and cost-effectiveness has seen governments increasingly turn to artificial intelligence (AI) to enhance their operations. The significant growth in the use of AI in migration and border control makes its application to the process of refugee status determination (RSD), which is burdened by delay and heavy caseloads, a real possibility. AI may have a role to play
in supporting decision makers to assess the credibility of asylum seekers, as long as it is understood as a component of the humanitarian context. This article argues that AI will only benefit refugees if it does not replicate the problems of the current system. Credibility assessments, a central element of RSD, are flawed because the bipartite standard of a ‘well-founded fear of being persecuted’ involves consideration of a claimant’s subjective fearfulness and the objective validation of that fear. Subjective fear imposes an additional burden on the refugee, and the ‘objective’ language of credibility indicators does not eliminate the challenges decision makers face in assessing the credibility of other humans when external, but largely unseen, factors such as memory, trauma, and bias are present.
Viewing the use of AI in RSD as part of the digital transformation of the refugee regime forces us to consider how it may affect decision-making efficiencies, as well as its impacts on refugees. Assessments of harm and benefit cannot be disentangled from the challenges AI is being tasked to address. Through an analysis of algorithmic decision making, predictive analysis, biometrics, automated credibility assessments, and digital forensics, this article reveals the risks and opportunities involved in the application of AI in RSD. On the one hand, AI’s potential to produce greater standardization, to mine and parse large amounts of data, and to address bias holds significant possibility for increased consistency, improved fact-finding, and corroboration. On the other hand, machines may end up replicating and manifesting the unconscious biases and assumptions of their human developers, and AI has a limited ability to read emotions and process impacts on memory. The prospective nature of a well-founded fear is counter-intuitive if algorithms learn from training data that is historical, and an increased ability to corroborate facts may shift the burden of proof to the asylum seeker. Breaches of data protection regulations and human rights loom large. The potential application of AI to RSD reveals flaws in refugee credibility assessments that stem from the need to assess subjective fear. If the use of AI in RSD is to become an effective and ethical form of humanitarian tech, the ‘well-founded fear of being persecuted’ standard should be based on objective risk only.