Data Architecture Decides Asylum Claims Before Humans
Courts debate asylum bans, but a silent network of apps, face recognition, and biometric engines already processes border arrivals, and most of it was never voted on.
bloomberg.com
Ana R., a 34-year-old mother from Venezuela, waited eight months in northern Mexico for an appointment slot on the CBP One app. When it came, she submitted her photograph, her geolocation, her family's biographical data, and a live facial scan. The appointment led to a parole entry, a work permit, and a removal notice. On April 1, 2026, a federal judge in Boston ruled that the Department of Homeland Security had illegally stripped immigration status from her and nearly 900,000 others who entered the United States through that same digital portal. Ana's name is a pseudonym.
That ruling, from U.S. District Judge Indira Talwani, arrived three weeks before a federal appeals court blocked President Trump's executive order suspending asylum access at the southern border entirely. The appeals court decision, reported by The Baltimore Sun and USA TODAY among others, affirmed a lower court's finding that the president cannot, by executive order, suspend a right codified in federal statute. But the legal rulings tell only part of the story. What sits between the policy fights and the people waiting at the border is a sprawling, layered data architecture that is quietly deciding who gets a credible-fear interview, whose face gets matched against scraped social-media photos, and whose data persists in government systems long after a court says the person should not have been removed from them.
The CBP One application, launched in January 2023 under the Biden administration, was the first piece of the stack that most asylum seekers encountered. To schedule an appointment at a port of entry, applicants submitted geolocation data, facial photographs, passport information, and detailed biographical questionnaires. The app's privacy notice, reviewed by researchers at the Electronic Frontier Foundation, disclosed that data could be shared with other DHS components, including Immigration and Customs Enforcement, and retained for up to 75 years in some record schedules. For migrants who lacked reliable internet, or whose phones could not run the facial-matching module, the app was not a convenience; it was a locked door.
That door opened into a bigger room. In February 2026, Wired reported that ICE and CBP had deployed a face-recognition application called Mobile Fortify, built by the Japanese technology firm NEC, to identify immigrants and U.S. citizens in the field. The app had been used more than 100,000 times, according to one government estimate, even though it had never been approved for identity verification. DHS's own privacy office had previously declined to authorize the tool for that purpose, citing accuracy concerns. The agency deployed it anyway, reclassifying it as an 'investigative lead' tool rather than an identification system. The distinction mattered only inside the compliance paperwork. To the person being scanned, the result was the same.
The same month, Wired reported that CBP had signed a $225,000 contract with Clearview AI for a year of access to the company's face-recognition database, built by scraping billions of images from social media platforms without consent. The stated purpose was 'tactical targeting.' No further public specification was provided. Clearview AI has been fined repeatedly by European data-protection authorities, including a €20 million penalty from France's CNIL and a £7.5 million fine from the UK's Information Commissioner's Office, for unlawfully collecting biometric data. Its U.S. government contracts continue to expand.
The asylum seeker who submits a photograph through CBP One does not know that their face may be run against Clearview's database, or NEC's, or the FBI's Next Generation Identification system. They are not told which algorithm processed their image, what error rate that algorithm carries for people with their skin tone, or whether a match flagged by the system was reviewed by a human before it became part of their file. This is not an oversight. It is the design.
'The entire stack is built so that by the time a person sits down for a credible-fear interview, a dossier has already been assembled, often using tools that the person has no way to challenge,' said Petra Molnar, associate director of the Refugee Law Lab at York University.
Molnar, author of The Walls Have Eyes, a 2024 book on border surveillance technology, said that the shift is structural. 'What we are seeing is a shift from the interview as a site of legal assessment to the interview as a site where algorithmic determinations are simply ratified,' she said. The dossier that precedes the asylum seeker into the interview room includes not only biometric matches but also outputs from automated translation tools, which researchers at the University of California, Berkeley, have shown can introduce errors in critical phrases related to threats of violence or political persecution.
The data architecture extends well beyond face recognition. In February 2026, Wired reported that DHS was pursuing a unified biometric search engine that would combine face and fingerprint databases across multiple agencies into a single queryable platform. The plan would effectively dissolve the wall between civil immigration enforcement and criminal investigation databases, allowing a CBP officer at a port of entry to query fingerprint records collected by local police departments through shared federal interfaces. DHS proposed this consolidation after dismantling the centralized privacy-review mechanism that had previously evaluated biometric programs before deployment.
At the same time, Border Report documented how DHS contractors are using AI-powered skip-tracing tools to locate migrants accused of being in the country without authorization. These tools aggregate utility bills, social-media activity, motor-vehicle records, and commercial data-broker files to generate real-time location profiles. The companies providing the software, including LexisNexis Risk Solutions, sell similar products to debt collectors and private investigators. The same data supply chain feeds both the asylum-screening pipeline and the removal machine.
This is the asylum-decision stack in 2026. A person fleeing violence arrives at a border or a port of entry. Their face, fingerprints, and biographical data enter a system that cross-references them against scraped social-media imagery, commercial data-broker profiles, criminal-records databases, and prior immigration encounters. An algorithmic assessment generates a risk score or flags inconsistencies. That output precedes the credible-fear interview, shaping the questions the asylum officer asks and the skepticism with which the answers are received. If the person receives a positive credible-fear determination, they may still be enrolled in Alternatives to Detention, a GPS-monitoring program that tracks their movements until their case is adjudicated, sometimes for years. If they receive a negative determination, the same data architecture that processed their claim will be used to locate and remove them.
The legal rulings that bookended April 2026 did not address this architecture. Judge Talwani's order reinstating CBP One parolees focused on the procedural violation: DHS had terminated status without individual notice or a hearing. The appeals court's asylum-ban ruling, covered by The Baltimore Sun, addressed the president's statutory authority to suspend asylum, not the validity of the digital tools used to process claims. The courts are litigating the law while the data architecture expands underneath it, indifferent to who wins.
A small number of lawmakers have noticed. In February 2026, a group of Senate Democrats introduced the 'ICE Out of Our Faces Act,' which would prohibit ICE and CBP from using facial recognition technology, Ars Technica reported. The bill has not advanced out of committee. Even if it passed, it would address only one layer of the stack. The GPS monitoring, the data-broker pipelines, the biometric search engine, and the automated risk-assessment tools would remain untouched. Building legislation around individual technologies, rather than the data flows they enable, guarantees that regulation will always lag behind procurement.
The question of consent runs through every layer and comes apart at each one. A person applying for asylum through CBP One consents to biometric collection because there is no alternative path to a port-of-entry appointment. They consent to data sharing because the privacy notice lists it and there is no opt-out. They consent to face matching because refusing means the appointment slot goes to someone else. When the EFF and the ACLU of Northern California filed a Freedom of Information Act request in 2025 seeking the full data-sharing agreements between CBP, ICE, and the commercial vendors supplying these tools, DHS released contracts so heavily redacted that the scope of inter-agency data flows remained invisible. The FOIA appeal is pending.
European regulators have moved, at least partially, to constrain similar architectures. In the EU, the AI Act classifies real-time biometric identification in publicly accessible spaces as a prohibited practice, with limited exceptions for law enforcement. The European Data Protection Supervisor has opened investigations into Eurodac, the EU's asylum-seeker fingerprint database, for potential violations of purpose limitation. But these frameworks are not self-executing, and Frontex, the EU's border agency, has sought to expand its use of drone surveillance and biometric screening at external borders. The Conversation documented a parallel spread of AI-driven border surveillance in West Africa, where European funding is helping to deploy systems that would face greater legal resistance on European soil. The architecture is mobile; the regulation is territorial.
What the Stack Costs to Challenge
For Ana R., the Boston ruling meant that her immigration status was reinstated, at least temporarily. But the data she surrendered to CBP One remains in the stack. Her facial photograph sits on DHS servers. Her biometric template is queryable through whatever search interface DHS builds next. Her GPS history is preserved in records schedules that stretch across decades. The court could restore her legal status. It could not retrieve her data. The same is true for the hundreds of thousands of others whose status was reinstated by Judge Talwani's order. Their legal identities were restored; their digital traces were not.
What readers can verify: the privacy impact assessments for CBP One and Mobile Fortify are publicly available through the DHS privacy office website, though they are updated irregularly and often lag behind deployment. The FOIA requests filed by the EFF and ACLU of Northern California are trackable through agency portals. Researchers at Georgetown Law's Center on Privacy and Technology maintain a public dataset of federal biometric contracts. A hearing on the unified DHS biometric search engine is expected before the House Homeland Security Committee this summer. The record is there for anyone who wants to read it before the stack grows another layer.