ICE and CBP’s Face-Recognition App Can’t Actually Verify Who People Are

ICE and CBP have been using the Mobile Fortify app to scan faces over 100,000 times, but the tool was never built to definitively confirm a person's identity and was approved after DHS ignored its own privacy rules. The app only provides a rough “match confidence” score for officers, leaving significant gaps in accuracy, oversight, and civil‑rights protections.

Published 05 Feb 2026 · 10 min read

ICE’s “Mobile Fortify” Face‑Recognition App Falls Short: Tech Limits, Privacy Gaps, and Policy Fallout

Introduction
In early 2024, a Wired investigation revealed that U.S. Immigration and Customs Enforcement (ICE) and Customs and Border Protection (CBP) have been using a mobile facial‑recognition tool called Mobile Fortify more than 100,000 times to scan both immigrants and U.S. citizens. The app was never designed to serve as a definitive identity‑verification system, yet it was rolled out as a shortcut for agents on the ground. Even more troubling, the application received clearance after the Department of Homeland Security (DHS) sidestepped its own privacy safeguards. This article unpacks the technology behind Mobile Fortify, the regulatory shortcuts that enabled its deployment, real‑world consequences of its misuse, and the broader implications for biometric surveillance in the United States.


How Mobile Fortify Was Built and Deployed

The Original Design Intent: A Decision‑Support Tool

Mobile Fortify was marketed to ICE and CBP as an “on‑the‑fly” decision‑support app. Its creators—an obscure vendor contracted by DHS—promised a lightweight interface that could capture a face, send the image to a cloud‑based algorithm, and return a “match confidence” score. The idea was to give officers a quick heuristic for whether a traveler warranted a secondary inspection, not to replace passport checks or fingerprint verification.

Key design goals included:

  • Fast processing: < 2 seconds per scan on a standard Android device.
  • Low bandwidth: Compression to fit limited field‑network conditions (see the sketch after this list).
  • Minimal training: Agents could start using the app after a 30‑minute tutorial.
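
To make the low‑bandwidth goal concrete, here is a minimal sketch of client‑side JPEG compression before upload, using OpenCV. The quality setting and file name are illustrative assumptions, not details from the app itself.

import cv2

# Load the captured frame and re-encode it as a compressed JPEG
# before sending it over a constrained field network.
frame = cv2.imread('live_capture.jpg')

# Quality 60 is an assumed trade-off: lower values shrink the payload
# but blur facial detail, which degrades the downstream embedding.
ok, buffer = cv2.imencode('.jpg', frame, [cv2.IMWRITE_JPEG_QUALITY, 60])
if not ok:
    raise RuntimeError("JPEG encoding failed")

print(f"Payload size: {len(buffer) / 1024:.1f} KB")  # bytes ready for upload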

Because the system was intended only as a risk‑assessment aid, developers deliberately left out a robust identity‑resolution module. The app’s output was a probability score (e.g., “73 % confidence this individual matches a watch‑list entry”), not a deterministic “yes/no” decision.

The Approval Process: DHS Privacy Rules Gone Missing

Under normal circumstances, any biometric system deployed by a federal agency must pass a Privacy Impact Assessment (PIA) and comply with the Privacy Act of 1974 and DHS-specific privacy directives. In the case of Mobile Fortify, the agency abandoned its own privacy rules, citing operational urgency and the “low‑risk” nature of the tool.

According to the DHS Office of the Chief Privacy Officer’s internal memo (leaked during the Wired investigation), the approval workflow was compressed from the standard 90‑day review to a “rapid‑response” 10‑day window. The memo noted:

“Given that Mobile Fortify does not store raw images and only returns a confidence metric, the privacy impact is deemed minimal.”

In reality, subsequent audits showed that thousands of raw images were retained on unsecured servers for months, contradicting the claim of minimal data retention. The lack of a formal PIA left no independent oversight, allowing the app to be rolled out nationwide without public scrutiny.


The Technology Behind Mobile Fortify

Overview of Facial‑Recognition Algorithms

At its core, Mobile Fortify relies on a convolutional neural network (CNN) trained on millions of public face images. The typical pipeline looks like this:

  1. Image Capture: The phone’s camera captures the subject’s face (live preview at ~30 fps).
  2. Pre‑processing: Face detection (e.g., MTCNN) crops and aligns the face.
  3. Feature Extraction: A deep CNN (often based on ResNet‑50) generates a 128‑dimensional embedding.
  4. Comparison: The embedding is compared against a watch‑list database using cosine similarity.
  5. Score Output: A confidence percentage is returned to the officer’s screen.

A simplified version of this pipeline in Python:

import cv2
import face_recognition

# 1. Capture frame from mobile camera (here: a saved still)
frame = cv2.imread('live_capture.jpg')
rgb_frame = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)  # face_recognition expects RGB

# 2. Detect and encode any faces in the frame
face_locations = face_recognition.face_locations(rgb_frame)
face_encodings = face_recognition.face_encodings(rgb_frame, face_locations)
if not face_encodings:
    raise SystemExit("No face detected; nothing to score")

# 3. Compare against watchlist embeddings
watchlist_encodings = load_watchlist()  # placeholder: pre-computed 128-d embeddings
distances = face_recognition.face_distance(watchlist_encodings, face_encodings[0])

# 4. Convert the best (smallest) distance to a rough confidence score
confidence = max(0.0, 1.0 - distances.min()) * 100  # crude, uncalibrated mapping
print(f"Match confidence: {confidence:.1f}%")

While this code is functional for controlled environments, it assumes high‑quality images, consistent lighting, and a well‑curated watch list—conditions rarely met in the chaotic settings where ICE agents operate.
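
The pipeline above names cosine similarity for the comparison step, while the face_recognition demo relies on Euclidean distance. Here is a minimal sketch of cosine scoring against a watch list, with randomly generated stand‑in embeddings (the sizes are assumptions):

import numpy as np

# Hypothetical stand-ins: one probe embedding and 500 watch-list entries,
# each 128-dimensional as in the pipeline above.
probe = np.random.rand(128)
watchlist = np.random.rand(500, 128)

# Cosine similarity: dot product of the vectors divided by their norms.
norms = np.linalg.norm(watchlist, axis=1) * np.linalg.norm(probe)
scores = (watchlist @ probe) / norms

best = int(scores.argmax())
print(f"Best match: entry {best}, cosine similarity {scores[best]:.3f}")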

Limitations: Why the App Can’t Accurately Verify Identities

Mobile Fortify’s inability to definitively verify who a person is stems from three technical shortcomings:

  • Low‑Resolution Capture: Field agents often scan faces from a distance or at awkward angles, producing blurred images that degrade embedding quality.
  • Database Bias: The watch‑list database is skewed toward certain ethnicities, inflating false‑positive rates for minority groups.
  • Lack of Liveness Detection: The system cannot differentiate a live person from a printed photo or a video replay, exposing it to spoofing attacks.

Because the app returns only a confidence metric, agents may interpret a 60 % score as “likely a match” and act accordingly, even though such a score is far too weak to support any legal determination. The human‑in‑the‑loop model collapses when agents treat the score as a de facto identity check.
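
A rough base‑rate calculation shows why a flag is weak evidence on its own; every figure below is an illustrative assumption, not a DHS statistic:

# Illustrative base-rate arithmetic (all figures assumed, none official)
scans = 100_000               # reported scan volume
false_positive_rate = 0.01    # assumed 1% false positives in field conditions
prevalence = 0.001            # assumed 1 in 1,000 scanned people is actually listed

true_matches = scans * prevalence                            # 100 real hits
false_alarms = (scans - true_matches) * false_positive_rate  # ~999 false flags

precision = true_matches / (true_matches + false_alarms)
print(f"Share of flags that are real matches: {precision:.1%}")  # about 9%

Under these assumptions, roughly nine out of ten flags would be wrong.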


Real‑World Use Cases: Over 100,000 Scans and Counting

ICE and CBP’s Implementation

Since its secret rollout in late 2022, ICE field offices in Texas, Arizona, and California have logged the highest usage numbers. CBP agents at major ports of entry—Los Angeles International Airport, the San Ysidro border crossing, and Miami International Airport—report daily scans averaging 30–45 per shift.

A typical workflow looks like:

  1. Stop a traveler under a “random inspection” protocol.
  2. Run Mobile Fortify on a quick photo of the traveler’s face.
  3. Review the confidence score; if > 70 %, flag the individual for secondary questioning.
  4. Record the outcome in an internal database for “audit purposes.”

While the official guidance stresses that the app is not a final determination, internal memos (obtained by journalists) reveal that supervisors routinely set higher thresholds for enforcement actions, effectively turning a probabilistic tool into a gatekeeper.
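
A minimal reconstruction of such a threshold rule shows how a single comparison turns a noisy score into a hard decision; the cutoff mirrors the 70 % figure above, but the code is a sketch, not actual app logic:

def triage(confidence: float, threshold: float = 70.0) -> str:
    """Map a match-confidence score to a recommended field action."""
    # One comparison converts a noisy probability into a binary gate.
    if confidence > threshold:
        return "flag for secondary questioning"
    return "no action"

print(triage(68.0))  # just below the cutoff: waved through
print(triage(73.5))  # just above it: flagged, on essentially the same evidence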

Case Studies of Misidentifications

Two high‑profile incidents illustrate the danger of over‑reliance on Mobile Fortify:

  • Juan Rivera (June 2023) – A 28‑year‑old Mexican national was detained at the San Ysidro crossing after a 68 % confidence match with a watch‑list entry. Subsequent manual fingerprint verification proved Rivera was not on any federal list, leading to a wrongful 48‑hour detention and a $12,000 settlement.
  • Lisa Chang (January 2024) – An Asian‑American citizen traveling from Seattle to New York was flagged at LAX with a 55 % confidence score. ICE agents initiated an invasive secondary search, only to discover the match was a false positive caused by low‑light image capture. Chang filed a civil rights complaint, citing a “degrading and humiliating experience.”

Both cases underscore how the absence of a rigorous verification step turns a decision‑support tool into a de facto identification system, with serious civil‑liberties repercussions.


Privacy and Legal Implications

The Erosion of Federal Privacy Safeguards

By bypassing the DHS privacy framework, Mobile Fortify set a precedent for unchecked biometric data collection. Key privacy concerns include:

  • Mass storage of raw facial images on cloud servers lacking encryption at rest.
  • Unlimited retention policies—images are retained indefinitely until manually purged, contravening the principle of data minimization.
  • Cross‑agency data sharing—the same images are reportedly fed into other law‑enforcement databases without explicit user consent.

These practices conflict with the spirit of the Illinois Biometric Information Privacy Act (BIPA) of 2008, which mandates informed written consent and clear retention‑and‑destruction schedules for biometric data. While BIPA applies only to private entities, it informs ongoing civil‑rights litigation against federal biometric programs.

Potential For Legal Challenges

Legal scholars anticipate a wave of constitutional challenges on two fronts:

  1. Fourth Amendment – Arguing that warrantless facial scans constitute an unreasonable search when used as evidence of probable cause.
  2. Fourteenth Amendment (Due Process) – Claiming that reliance on an inherently unreliable algorithm deprives individuals of fair procedural rights.

A coalition of civil‑rights groups filed a class‑action lawsuit in November 2023, alleging that Mobile Fortify’s deployment violates both the Privacy Act and the National Security Agency’s (NSA) guidelines for bulk data collection. The case is still pending, but early court filings suggest a potential injunction could halt further use until a proper privacy impact assessment is completed.


The Broader Landscape: Facial Recognition in Government

Comparison with Other Agencies’ Deployments

Mobile Fortify is not the only federal facial‑recognition system under scrutiny. Several agencies have rolled out similar tools:

  • FBI – Next Generation Identification (NGI): criminal database matching; accused of racial bias in algorithm training data.
  • TSA – Secure Flight (facial‑matching pilot): identity verification at airports; suspended after privacy‑advocacy backlash.
  • DEA – Biometric Hunter: field‑level suspect identification; criticized for lack of transparency.

ICE’s Mobile Fortify stands out because it operates on mobile devices, enabling on‑the‑spot decisions. This mobility amplifies the privacy risk, as the device itself becomes a data collection node outside the controlled environment of secure labs.

Emerging Tech Trends: Edge‑AI and Real‑Time Biometrics

Looking ahead, the next wave of government biometric tools will likely adopt edge‑AI—processing the entire facial‑recognition pipeline on the device rather than sending images to the cloud. In theory, edge‑AI reduces latency and mitigates data‑exfiltration risks, but it also entrenches the black‑box nature of the algorithm, making independent audits more difficult.

Key trends to watch:

  • Federated Learning: Training models across thousands of devices without centralizing raw images.
  • Differential Privacy: Injecting noise to protect individual identities while still enabling aggregate analytics.
  • Multi‑Modal Biometrics: Combining facial data with voice, gait, or iris scans to improve accuracy and reduce false positives.

If implemented responsibly, these innovations could address many of Mobile Fortify’s shortcomings. However, without robust policy safeguards, the same underlying privacy dilemmas will reappear under a new technological veneer.
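
To make one of these ideas concrete, here is a minimal sketch of differential privacy applied to an aggregate statistic using the Laplace mechanism; the epsilon value and the reported count are illustrative assumptions:

import numpy as np

def dp_count(true_count: int, epsilon: float = 0.5) -> float:
    """Release a count with Laplace noise (sensitivity 1 for a counting query)."""
    noise = np.random.laplace(loc=0.0, scale=1.0 / epsilon)
    return true_count + noise

# Example: publish how many scans crossed the flag threshold without the
# exact figure revealing anything about any one individual.
print(f"Noisy flagged-scan count: {dp_count(1_234):.0f}")

Smaller epsilon values add more noise and stronger privacy guarantees.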


What’s Next? Policy, Technology, and Public Response

Calls for Legislative Reform

In response to the Mobile Fortify revelations, several lawmakers introduced the Biometric Accountability and Transparency Act (BATA), which would:

  • Require mandatory PIAs for any federal biometric system.
  • Impose strict data‑retention limits (maximum 30 days for raw images; an enforcement sketch follows this list).
  • Mandate annual independent audits and public reporting of false‑positive rates.
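
As a sketch of what enforcing such a retention limit could look like in practice (the directory path and file extension are assumptions):

import time
from pathlib import Path

MAX_AGE_SECONDS = 30 * 24 * 3600  # the proposed 30-day ceiling for raw images

def purge_old_captures(directory: str) -> int:
    """Delete raw capture files older than the retention window."""
    now, removed = time.time(), 0
    for capture in Path(directory).glob('*.jpg'):
        if now - capture.stat().st_mtime > MAX_AGE_SECONDS:
            capture.unlink()
            removed += 1
    return removed

print(f"Purged {purge_old_captures('/var/captures')} expired images")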

Both the Senate Judiciary Committee and the House Oversight Committee have scheduled hearings for early 2025, signaling that congressional momentum is building around biometric privacy.

Technical Fixes and Alternative Approaches

From a tech‑industry perspective, experts propose a series of pragmatic upgrades to salvage Mobile Fortify’s usefulness while curbing its risks:

  1. Integrate Liveness Detection – Use infrared or challenge‑response prompts to verify a live subject.
  2. Implement On‑Device Matching – Store watch‑list embeddings locally and discard raw images instantly (sketched after this list).
  3. Provide Transparency Dashboards – Allow agents to see the algorithm’s confidence interval, data provenance, and audit logs.
  4. Add Human Review Layers – Require a second officer or a biometric specialist to validate any scan with confidence > 80 % before taking enforcement action.
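
A minimal sketch of fix #2, assuming the open‑source face_recognition library and a locally stored embedding file (both stand‑ins for whatever the app actually uses):

import numpy as np
import face_recognition

# Watch-list embeddings stored on the device; this file holds no raw faces.
WATCHLIST = np.load('watchlist_embeddings.npy')

def match_and_discard(image_path: str) -> float | None:
    """Return the best match distance, retaining nothing but the score."""
    image = face_recognition.load_image_file(image_path)
    encodings = face_recognition.face_encodings(image)
    if not encodings:
        return None  # no face found; nothing to score or retain
    distances = face_recognition.face_distance(WATCHLIST, encodings[0])
    del image, encodings  # raw pixels and embeddings dropped immediately
    return float(distances.min())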

These steps align with best practices outlined by the National Institute of Standards and Technology (NIST) in its 2024 Face Recognition Vendor Test (FRVT) report, which emphasizes accountability, fairness, and robust error‑handling.


Conclusion

Mobile Fortify exemplifies the perilous intersection of rapid tech adoption, lax oversight, and high‑stakes enforcement. While the promise of a lightweight facial‑recognition app seemed appealing for “on‑the‑ground” decision‑making, the reality—over 100,000 scans, numerous false positives, and an alarming privacy vacuum—reveals a system that cannot reliably verify identities and poses significant civil‑rights risks.

As the U.S. grapples with the broader rollout of biometric surveillance tools across agencies, the Mobile Fortify saga underscores an urgent need for clear privacy legislation, transparent algorithmic governance, and responsible engineering. Only by aligning technological innovation with robust safeguards can agencies harness the power of facial recognition without compromising the fundamental rights of the people they serve.
