Written by Evie Bellino
I. Introduction
As artificial intelligence (“AI”) becomes increasingly embedded in border control systems around the world, a troubling question emerges: are these technological tools helping governments manage migration more efficiently, or are they dangerously undermining the rights of some of the world’s most vulnerable populations?1See CBP Enhances Biometrics for Non‑U.S. Travelers Entering and Exiting United States, U.S. Customs & Border Prot. (Nov. 11, 2020), https://www.cbp.gov/newsroom/national-media-release/cbp-enhances-biometrics-non-us-travelers-entering-and-exiting-united; Steven Hubbard, Invisible Gatekeepers: DHS’ Growing Use of AI in Immigration Decisions, Am. Immigr. Council (May 9, 2025), https://www.americanimmigrationcouncil.org/blog/invisible-gatekeepers-dhs-growing-use-of-ai-in-immigration-decisions/. In the United States, the scale of the asylum process alone highlights the stakes.2See Nicole Ward & Jeanne Batalova, Refugees and Asylees in the United States, Migration Pol’y Inst. (June 15, 2023), https://www.migrationpolicy.org/article/refugees-and-asylees-united-states. As of December 31, 2024, over 1.4 million affirmative asylum applications were pending with U.S. Citizenship and Immigration Services.3See Immigration and Citizenship Data, U.S. Citizenship & Immigr. Servs., https://www.uscis.gov/tools/reports-and-studies/immigration-and-citizenship-data (refer to “USCIS Data Library”; select “Asylum” from the first dropdown box; then click search to see all relevant data). Yet in 2023, only 54,350 individuals were granted asylum.4See id. The use of AI in this context seems like a practical and efficient way to manage overwhelming caseloads and streamline migration processes; however, its deployment in such consequential environments warrants serious scrutiny.
This blog post will outline the emerging use of AI in border enforcement systems and discuss how the use of this technology violates non-refoulement obligations and other foundational refugee protections of international law.5See infra Part II.
II. Artificial Intelligence at the Border
In the United States, the Department of Homeland Security has mandated the “CBP One” mobile application as the exclusive method of obtaining an asylum appointment at southern ports of entry.6See Austin Kocher, Glitches in the Digitization of Asylum: How CBP One Turns Migrants’ Smartphones into Mobile Borders, 13 Societies 149, 8 (2023). Individuals are required to upload a selfie as part of the app’s “facial liveness” test and to verify via GPS that their location is proximate to a port of entry.7See CBP One Mobile Application Violates the Rights of People Seeking Asylum in the United States, Amnesty Int’l (May 9, 2024), https://www.amnesty.org/en/latest/news/2024/05/cbp-one-mobile-application-violates-the-rights-of-people-seeking-asylum-in-the-united-states/. The application effectively turns the individual’s phone into a digital border and functionally determines whether the individual can successfully make an appointment with Customs and Border Protection (“CBP”).8See Raul Pinto, CBP One Is Riddled with Flaws That Make the App Inaccessible to Many Asylum Seekers, Am. Immigr. Council (Feb. 28, 2023), https://www.americanimmigrationcouncil.org/blog/cbp-one-app-flaws-asylum-seekers/. CBP has defended the facial recognition component as a measure to prevent and reduce fraud, but in practice the test has become one of the main barriers for many users.9See Kocher, supra note 6. Numerous reports document that the facial recognition fails to capture the images of individuals with darker skin tones.10See Amnesty Int’l, supra note 7. The application disproportionately rejects Black, Brown, and Indigenous applicants at drastically higher error rates than those with lighter skin tones.11See Kocher, supra note 6. Reports from shelters in Mexican border cities indicate widespread rejection of photos of Black asylum seekers even when the photo lighting is enhanced.12See Pinto, supra note 8.
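The sequential gating described above can be sketched in a few lines of Python. The function name, return strings, and rules below are purely hypothetical illustrations, not CBP’s actual implementation (which is not public); the sketch only shows the structural point that a biometric false negative ends the process before any human ever reviews the claim.

```python
# Hypothetical sketch of the sequential gating an applicant faces in a
# CBP One-style workflow. All names and rules are illustrative only.

def request_appointment(selfie_passes_liveness: bool,
                        gps_near_port_of_entry: bool) -> str:
    # Step 1: facial "liveness" check -- a false negative here blocks
    # the applicant regardless of the merits of their claim.
    if not selfie_passes_liveness:
        return "rejected: liveness check failed"
    # Step 2: geolocation check -- the phone itself acts as the border.
    if not gps_near_port_of_entry:
        return "rejected: not near a port of entry"
    # Only now can the applicant even join the appointment queue.
    return "queued for appointment"

# An asylum seeker whose darker skin tone trips the liveness check
# never reaches step 2, let alone an officer.
print(request_appointment(selfie_passes_liveness=False,
                          gps_near_port_of_entry=True))
# -> rejected: liveness check failed
```

The structure makes the “digital border” critique concrete: the checks are conjunctive, so a single automated failure is dispositive, with no appeal path inside the workflow.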
CBP claims extremely high match rates across demographic groups, but its testing was conducted in controlled, demographically skewed airport settings that bear little resemblance to conditions facing asylum seekers at the border.13See Michael Schuckers et al., Statistical Methods for Assessing Differences in False Non‑Match Rates Across Demographic Groups, in 3 Pattern Recognition, Comput. Vision & Image Processing 570–81 (Jean-Jacques Rousseau & Bill Kapralos eds., 2022). The agency also does not collect or disclose race or ethnicity in its performance data, rendering the systemic bias and daily rejection of asylum seekers invisible in the statistics.14See id.
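The statistical point here — that a reassuring aggregate match rate can coexist with a far higher failure rate for a minority subgroup, and that the disparity vanishes from the record if race or ethnicity is never collected — can be illustrated with invented numbers (these figures are hypothetical, not CBP performance data):

```python
# Illustration of how aggregate accuracy can mask subgroup bias.
# All numbers below are hypothetical, not actual CBP data.

groups = {
    # group: (number of capture attempts, false non-match rate)
    "lighter-skinned applicants": (9_000, 0.01),
    "darker-skinned applicants":  (1_000, 0.15),
}

total_attempts = sum(n for n, _ in groups.values())
total_failures = sum(n * rate for n, rate in groups.values())

# The aggregate rate looks reassuring...
overall_fnmr = total_failures / total_attempts
print(f"overall false non-match rate: {overall_fnmr:.1%}")  # 2.4%

# ...while the minority subgroup fails fifteen times as often. If
# demographic data is never recorded, only the 2.4% figure exists.
for group, (n, rate) in groups.items():
    print(f"{group}: {rate:.0%} failure rate")
```

Because the disadvantaged group is a small share of total attempts, its 15% failure rate barely moves the headline number, which is precisely why per-group reporting matters.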
Across the Atlantic, the iBorderCtrl pilot program in Hungary, Greece, and Latvia tested AI lie detection using webcam interviews and micro-expression analysis.15See Alberto Rinaldi & Sue Anne Teo, The Use of Artificial Intelligence Technologies in Border and Migration Control and the Subtle Erosion of Human Rights, Int’l & Comp. L.Q. at 1, 6 (2025). Participants’ facial muscle movements were analyzed to detect deception while they answered pre-scripted border questions.16See id. Studies have shown that because the algorithm was trained on white, emotionally neutral male faces, it systematically flagged trauma survivors, racial minorities, and women as deceptive.17Denise Almeida, Konstantin Shmarko & Elizabeth Lomas, The Ethics of Facial Recognition Technologies, Surveillance, and Accountability in an Age of Artificial Intelligence: A Comparative Analysis of US, EU, and UK Regulatory Frameworks, 2 AI Ethics 377, 379 (2022). Human rights advocates and experts strongly condemned the process as inherently discriminatory, as those seeking asylum are often emotionally fragile and likely to be dealing with trauma.18See id. The now-defunct program shows governments’ willingness to prioritize efficiency, even at the expense of accuracy, transparency, and the dignity of asylum seekers.
The Convention Relating to the Status of Refugees of 1951 (the “Convention”) and the 1967 Protocol Relating to the Status of Refugees (the “Protocol”) together form the cornerstone of international refugee protection.19See Convention Relating to the Status of Refugees, July 28, 1951, 189 U.N.T.S. 137; Protocol Relating to the Status of Refugees, Jan. 31, 1967, 606 U.N.T.S. 267. Originally adopted after World War II to protect individuals fleeing persecution, the Convention was later expanded by the Protocol into a legal framework for refugee protection worldwide, one that defines who qualifies as a refugee, the rights refugees are guaranteed, and the obligations of states toward them.20See Protocol Relating to the Status of Refugees, Jan. 31, 1967, 606 U.N.T.S. 267. The United States acceded to the Protocol on November 1, 1968, and later enacted the Refugee Act of 1980, amending the Immigration and Nationality Act (“INA”), to provide a systematic procedure for the admission of asylum seekers into the United States.21See Refugee Timeline, U.S. Citizenship & Immigr. Servs., https://www.uscis.gov/about-us/our-history/stories-from-the-archives/refugee-timeline (last visited Aug. 1, 2025); Refugee Act of 1980, 8 U.S.C. §§ 1101, 1157-59, 1521-25; S. Rep. No. 96-256, at 1 (1980), https://www.congress.gov/bill/96th-congress/senate-bill/643/text. Under Section 1231(b)(3) of the INA, the United States cannot return (“refouler”) an individual to a country where their life or freedom would be threatened because of their race, religion, nationality, membership in a particular social group, or political opinion.22See Immigration and Nationality Act of 1952 (INA), 8 U.S.C. § 1231(b)(3).
As it currently stands, the INA provides inadequate protections to refugees because it offers no guidance on the use of technology, specifically AI, in asylum proceedings. Existing refugee law is largely silent on the role of technology. The INA likewise fails to address the problems that arise from integrating such technologies into decision-making processes, or how these systems are to be held accountable. The applications and systems discussed above can lead to indirect refoulement because they unfairly reject asylum claims without due process or human judgment. These concerns are compounded by broader human rights obligations: it can be argued that these processes violate the foundational guarantees of equality before the law without discrimination and of an effective remedy when rights are violated, found in the International Covenant on Civil and Political Rights (“ICCPR”).23International Covenant on Civil and Political Rights arts. 2, 14, 26, Dec. 16, 1966, 999 U.N.T.S. 171.
The European Union has taken a notable step forward by addressing automated decision-making and profiling in its legislation.24Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the Protection of Natural Persons with Regard to the Processing of Personal Data and on the Free Movement of Such Data and Repealing Council Directive 95/46/EC (General Data Protection Regulation), 2016 O.J. (L 119) 1, 1. The EU’s General Data Protection Regulation (“GDPR”) places explicit restrictions on such practices, particularly in areas affecting individual rights.25See generally id. This approach should serve as a model for incorporating technological safeguards into international treaties. By adding specific provisions to the INA addressing the implementation of artificial intelligence in this sector, the United States can ensure that refugee protections remain enforceable and relevant in a rapidly changing digital landscape.
III. Conclusion
Artificial intelligence undeniably holds potential for improving efficiency and streamlining certain administrative aspects of border management. That promise is especially appealing for a system overwhelmed by high volumes and urgent demand, but streamlining a high-stakes process cannot come at the expense of basic human rights. As governments continue to integrate artificial intelligence into immigration systems, they must also commit to building strong legal safeguards, ensuring transparency in algorithmic decision-making, and implementing meaningful oversight mechanisms. To guarantee the right of non-refoulement, the mandatory use of the CBP One application must be reevaluated. Its use drastically limits access to asylum in the United States and cannot remain the exclusive means of entry for those at ports of entry along the border.
Moving forward, the United States should cease relying on facial recognition as the sole gateway within the application to scheduling an asylum appointment. There is no system of accountability for biometric failures that disproportionately affect individuals based on factors entirely outside their control. Individuals with legitimate claims are being denied the opportunity even to begin the process of seeking asylum, effectively turning a digital barrier into a legal one.