
Responsible artificial intelligence in international migration management: Legal and practical considerations (2025)

Posted on: June 27, 2025

What is this research about?

This article examines the use of artificial intelligence (AI), including generative AI, in the management of international migration. It explores the legal and practical considerations for responsible AI implementation by governments, focusing on transparency, regulatory frameworks, and the protection of migrants’ rights.

The author argues that States should be more transparent about their use of AI in international migration management to increase trust, strengthen the rule of law, and ensure accountability. She reviews current advances in AI regulation, highlights the importance of adhering to international human rights law, and introduces a framework to support States in the responsible implementation of AI in international migration management.

This research provides a timely, principled roadmap for governments and organizations seeking to implement AI responsibly in international migration management. Its emphasis on transparency, risk assessment, and human rights offers actionable guidance for a range of stakeholders.

This article is extracted from IOM's Migration Policy Practice (Vol. XIV, No. 2, June 2025).

What do you need to know?

AI and generative AI are increasingly used by governments to streamline migration processing, detect fraud, and manage identity verification. Examples include Australia (fraud detection, document analysis), Canada (visa triage), and Germany (identity management, name transliteration, mobile data analysis). Not all States are transparent about their AI use, raising concerns about trust and accountability.

This article provides a legal and ethical analysis, linking practical AI deployment with international human rights obligations. It also offers a structured framework for responsible AI use, emphasizing the “do no harm” principle, risk assessment, and the need for transparency.

What did the researcher find?

Key Highlights:

  • Transparency is Essential: States should publicly acknowledge their use of AI in migration management to build trust and strengthen the rule of law.
    • “Transparency is widely recognized as a cornerstone of trust, and this applies equally to the use of AI in international migration management.”
  • Regulatory Developments: The European Union’s Artificial Intelligence Act classifies AI used in migration as “high-risk,” requiring strict compliance with data quality, impact assessments, and risk management.
  • Human Rights Considerations: Even when national security exceptions apply, States must comply with international human rights law, including privacy and non-discrimination.
  • Framework for Responsible AI: The research introduces a framework based on the “do no harm” principle, risk assessment, and legal obligations to guide States in responsible AI deployment.

What are some particularly interesting themes and outlier findings?

  • Transparency vs. Security: There is a lack of uniform transparency, with some States not disclosing their AI use at all. The tension between the need for transparency and national security considerations is a recurring theme. The author argues for a balanced approach that does not compromise migrants’ rights.
  • Variation in Regulation: Different countries and regions are at varying stages of AI regulation, leading to inconsistencies in how migrants’ rights are protected.
  • Potential for Harm: Some AI applications in migration (e.g., generative AI for procedural or preparatory tasks) may not be classified as high-risk, potentially leaving gaps in oversight. The framework highlights that harm from AI can be individual, collective, or systemic, emphasizing the importance of thorough risk assessments.

How can you use this research?

For Policymakers:

  • Adopt and adapt the proposed framework for responsible AI use in migration management.
  • Increase transparency about AI deployment to build public trust and ensure accountability.
  • Ensure all AI systems, even those not classified as “high-risk,” are subject to human rights safeguards.

For Practitioners (Migration Authorities, NGOs):

  • Advocate for clear communication with migrants about how AI is used in their cases.
  • Implement risk assessment processes to identify and mitigate potential harms.

For Academics and Researchers:

  • Use the framework as a basis for further research on AI ethics in migration.
  • Study the impact of transparency measures on public trust and migrant outcomes.

For International Organizations:

  • Promote harmonized standards for AI use in migration management.
  • Encourage States to align AI deployment with international human rights obligations.

What did the researcher do?

Methods and Approach:

  • The research is a legal and policy analysis, synthesizing recent regulatory developments (such as the EU AI Act and the Council of Europe Framework Convention).
  • It draws on case examples from Australia, Canada, Germany, and the EU.
  • The paper reviews academic literature on transparency, trust, and the “do no harm” principle in humanitarian and technological contexts.
  • No primary data collection (such as surveys or interviews) is reported; the analysis is based on policy documents, laws, and existing literature.

Stakeholders Consulted:

  • The research references government disclosures, international regulatory bodies, and human rights organizations, but does not specify direct stakeholder engagement.
