The Asylum Decision Stack: How Border Tech Built It With Zero Audits
A cascade of federal court rulings has exposed disarray in the U.S. asylum system while leaving unexamined who built the border-tech infrastructure that now decides who gets protection, and what happens to the data it collects.
On April 1, 2026, a federal judge in Boston ruled that the Department of Homeland Security had illegally stripped immigration status from nearly 900,000 people who entered the United States through the Biden-era CBP One app. The ruling, which NPR reported reinstated parole status for those affected, surfaced a structural question that no court has yet answered: how did a smartphone application, built as a scheduling tool, become the primary gate through which hundreds of thousands of asylum seekers were processed, tracked, and, ultimately, stripped of their legal standing with little more than a database update?
Three weeks later, on April 24, a federal appeals court blocked President Donald Trump's executive order suspending asylum access at the southern border, ruling that immigration laws allow people to apply for asylum at the border and that the president cannot bypass this statutory obligation by declaring an "invasion." The decision, which WTOP News reported affirmed a lower court's findings, was a landmark reaffirmation of the right to seek protection on U.S. soil. It said nothing, however, about the technological systems that had already spent years reshaping what that right means in practice.
Between these two rulings sits an architecture that has received almost no sustained public scrutiny: the asylum-decision stack, a sprawling assemblage of apps, databases, vetting algorithms, and contractor-built analytics platforms that now mediates every step of the asylum process, from initial registration through credible-fear interviews to final adjudication and, increasingly, deportation. The stack did not arrive all at once. It accreted across two administrations, built by different vendors under different procurement vehicles, its components stitched together by data-sharing agreements whose full scope remains opaque even to congressional overseers.
The CBP One app is the most visible entry point. Launched in 2020 under the first Trump administration as a commercial-inspection scheduling tool, it was repurposed by the Biden administration in early 2023 as the sole mechanism for asylum seekers to request appointments at ports of entry along the southern border. Migrants without smartphones, reliable internet access, or the ability to navigate an English- and Spanish-language interface were left to queue in Mexican border cities, sometimes for months, refreshing a screen. The app collected biometric data, geolocation information, and detailed personal histories, all stored on cloud infrastructure managed by Customs and Border Protection and accessible to a network of subcontractors.
The data flow from that initial registration is extensive and poorly documented. A migrant who successfully secures a CBP One appointment provides their photograph, fingerprints, travel history, family ties, and the factual basis of their asylum claim. That information is routed through DHS's Automated Biometric Identification System, cross-referenced against criminal and national-security databases maintained by the FBI and the Terrorist Screening Center, and then fed into case-management systems operated by U.S. Citizenship and Immigration Services. At each handoff, the data is copied, logged, and retained under retention schedules that vary by agency and by purpose, creating what one academic researcher described in a 2025 audit of DHS data-broker contracts as "a permanent, unconsented surveillance record" for every applicant.
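The handoff pattern described above can be sketched in a few lines of code. This is an illustrative model only: the system names follow the article, but the ordering, the retention periods, and the idea that each transfer produces an independent copy on its own schedule are simplifying assumptions, not DHS specifications.

```python
from dataclasses import dataclass, field

@dataclass
class Record:
    applicant_id: str
    # Each entry is (system, retention_years); values below are invented.
    copies: list = field(default_factory=list)

def handoff(record: Record, system: str, retention_years: int) -> Record:
    """Each transfer copies and logs the record under that system's schedule."""
    record.copies.append((system, retention_years))
    return record

rec = Record("A-2026-0001")
for system, years in [
    ("CBP One",          75),  # assumed retention, for illustration only
    ("IDENT",            75),
    ("FBI/TSC",          30),
    ("USCIS case mgmt",  25),
]:
    handoff(rec, system, years)

# One application, four independent copies, each aging out (or not)
# under a different agency's rules.
print(len(rec.copies))
```

The point of the sketch is structural: nothing in the pipeline deletes or reconciles the upstream copies, which is what makes the researcher's phrase "permanent, unconsented surveillance record" a description of architecture rather than rhetoric.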
Palantir's Expanding Footprint
Behind the user-facing interface sits a deeper layer of analytical software, much of it supplied by Palantir Technologies. The company, which built its early business on counterterrorism and military-intelligence contracts, has become a central node in the immigration-enforcement infrastructure. In April 2026, WIRED reported that congressional Democrats were demanding answers from DHS about Palantir's role in the Trump administration's immigration crackdown, citing concerns that the company's software was being used to identify, locate, and prioritize individuals for deportation with minimal human oversight.
Palantir's platform ingests data from multiple DHS components, including CBP, ICE, and USCIS, and applies entity-resolution algorithms to connect individuals across datasets, building unified profiles that can track a person from their first CBP One appointment through every subsequent interaction with the immigration system. The company's contracts with DHS have grown sharply since 2024. A review of federal procurement records shows that Palantir's active task orders under DHS rose from roughly $150 million in fiscal year 2024 to more than $400 million by early 2026, with a significant portion allocated to ICE's Enforcement and Removal Operations division.
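Entity resolution of the kind described above can be reduced to a toy example. In this hedged sketch, records from different DHS components are linked into one profile when they share an identifier; the field names and data are invented, and real systems match on many noisy fields with probabilistic models rather than a single exact key.

```python
from collections import defaultdict

# Invented records standing in for feeds from separate DHS components.
records = [
    {"source": "CBP One", "fingerprint": "fp123", "name": "J. Doe"},
    {"source": "USCIS",   "fingerprint": "fp123", "name": "Jane Doe"},
    {"source": "ICE ERO", "fingerprint": "fp123", "name": "J Doe"},
    {"source": "CBP One", "fingerprint": "fp999", "name": "A. Roe"},
]

# A shared biometric key is the simplest stand-in for entity resolution:
# three records with different name spellings collapse into one profile.
profiles = defaultdict(list)
for rec in records:
    profiles[rec["fingerprint"]].append(rec["source"])

print(sorted(profiles["fp123"]))
```

Even this crude version shows why the technique matters for enforcement: once the profiles are unified, a query against any one component's data surfaces the person's entire history across all of them.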
The opacity of these systems is not incidental. Palantir's software is classified as a law-enforcement-sensitive tool, which means the algorithms that score individuals for enforcement priority, the training data used to build those models, and the error rates that result are shielded from public disclosure under the law-enforcement exemption to the Freedom of Information Act. Civil-liberties organizations including the Electronic Frontier Foundation and the ACLU have filed multiple FOIA requests seeking documentation of Palantir's DHS deployments; the responses, when they arrive, are heavily redacted and list exemptions in pages-long appendices.
The Credible-Fear Conveyor Belt
The asylum-decision stack does more than track people. It increasingly shapes the outcomes of their claims. After the Trump administration resumed asylum processing in late March 2026, following a months-long suspension, USCIS implemented new screening protocols that took effect on April 27. According to a law-firm alert published on JD Supra, the revised vetting process was "already impacting adjudications across multiple case types," introducing significant processing delays even as the appeals court was ordering the border reopened to asylum seekers.
The strengthened screening marks a technical shift as much as a policy one. Under the new protocols, asylum officers are required to run applicants through additional database checks that were previously reserved for cases flagged as high-risk. The practical effect is that every credible-fear interview, the initial screening that determines whether an asylum seeker has a plausible claim and can remain in the United States while their case proceeds, now triggers a cascade of automated queries that can take weeks to resolve, during which the applicant typically remains in detention.
The credible-fear standard itself has become a moving target. The statutory threshold requires that an applicant demonstrate a "significant possibility" of prevailing on their asylum claim. But the data points that feed into the screening algorithm, including country-condition reports generated by the State Department, prior immigration violations, and matches against watchlist databases, are weighted through a risk-assessment model whose parameters are not publicly available. The result is a system in which an asylum officer's discretionary judgment is increasingly guided, and sometimes overridden, by software-generated risk scores that the officer cannot fully interrogate and the applicant cannot challenge.
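The dynamic described above can be made concrete with a hypothetical sketch. The input factors mirror those named in the article; the weights and the cutoff are invented, precisely because the real parameters are not publicly available, which is the problem.

```python
# Hypothetical weights: the real model's parameters are not disclosed.
WEIGHTS = {
    "country_conditions_flag":     0.20,
    "prior_immigration_violation": 0.35,
    "watchlist_partial_match":     0.45,
}
CUTOFF = 0.50  # invented threshold for expedited-review routing

def risk_score(factors: dict) -> float:
    """Sum the weights of whichever factors are present."""
    return sum(WEIGHTS[k] for k, present in factors.items() if present)

applicant = {
    "country_conditions_flag":     True,
    "prior_immigration_violation": False,
    "watchlist_partial_match":     True,  # e.g., a fuzzy name match
}

score = risk_score(applicant)
# The officer sees only the score, not WEIGHTS; above the cutoff, the
# case is routed for expedited handling regardless of the interview.
print(score > CUTOFF)
```

Note what the sketch cannot show, because no one outside the system can: whether a fuzzy watchlist match really should outweigh everything an applicant says in a credible-fear interview. That is the interrogation an officer and an applicant are both denied.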
This is where the question of consent becomes inescapable. An asylum seeker arriving at the border does not choose to participate in the data-collection apparatus; participation is a condition of entry. The privacy notices that CBP publishes, which run to several thousand words of dense legal prose, are presented in a language many applicants do not speak and at a moment of acute vulnerability. The biometric data collected at the border remains in DHS databases for decades, shared under agreements with the Five Eyes intelligence alliance and accessible to an expanding circle of federal, state, and local law-enforcement agencies.
"The Supreme Court is weighing whether migrants must physically arrive in the United States to apply for asylum, a question whose answer will determine whether the digital infrastructure at the border replaces or merely supplements the legal right to seek protection." (Los Angeles Times, reporting on Supreme Court oral arguments, March 24, 2026)
The Supreme Court's consideration of that question, reported by the Los Angeles Times in March, is the next major legal fault line. But even as the justices deliberate, the practical architecture of asylum decision-making continues to harden. The CBP One app has been shuttered and reopened, its data migrated between administrations without a clear audit trail. The Palantir-powered targeting systems remain in place regardless of who occupies the White House. And the USCIS vetting protocols, once tightened, are rarely loosened.
Congressional oversight has been sporadic. The Senate's April 23 vote to adopt a budget plan funding ICE and Border Patrol, which the Chicago Tribune reported was a first step toward reopening the Department of Homeland Security after a funding dispute, included no specific provisions requiring audits of the algorithmic tools used in asylum adjudication. The House Homeland Security Committee has held hearings on border technology, but its requests for vendor documentation have been met with claims of proprietary sensitivity from both Palantir and smaller subcontractors whose identities DHS has declined to disclose.
The circuit court's April 24 ruling is a critically important reaffirmation of the legal right to seek asylum. But it is a ruling about law, not about infrastructure. The infrastructure will not be unwound by a judicial decision. It operates in the administrative space between statute and enforcement, where most asylum decisions are actually made, and where the data that determines a person's fate is collected, scored, and stored long before any judge reviews their case.
What the stack lacks most conspicuously is not legal authority but auditability. There is no public registry of the algorithms used in asylum screening. No independent body tests them for bias. No error-rate disclosure requirement applies to the risk-scoring models that flag certain applicants for expedited removal. The Government Accountability Office has issued reports on DHS technology acquisitions, but its reviews focus on procurement compliance rather than civil-rights impact. The Privacy and Civil Liberties Oversight Board, which theoretically has authority to examine these systems, has been operating without a confirmed chair since late 2024.
Researchers at Georgetown Law's Center on Privacy and Technology have attempted to map the asylum-data ecosystem through FOIA litigation and open-source intelligence, publishing a report in late 2025 that identified at least seventeen distinct software systems feeding into USCIS adjudication workflows. The report found that data entered into one system, such as a field officer's notes from a credible-fear interview, routinely propagated into enforcement databases maintained by ICE without notifying the applicant or providing a mechanism for correction. "The system is designed for surveillance, not for due process," the report concluded, a finding that has been cited in multiple ongoing lawsuits challenging the use of automated screening tools.
The administration's response to the April 24 appeals court ruling will be the next signal to watch. If it seeks en banc review or petitions the Supreme Court, the legal battle over the asylum ban will continue. But even if the ruling stands, the decision will not dismantle the Palantir contracts, retire the risk-scoring models, or delete the biometric databases. Those systems were built to persist across administrations, and they have. What a court ruling cannot do, and what no institution has yet attempted, is map the full data flow from a migrant's smartphone screen to the enforcement action that follows, and ask, at each step, whether anyone meaningfully consented to being part of it.