Cook County’s $1.1M AI Jail Surveillance Bid and the RAND Taxonomy: Where Criminal Justice Technology Goes From Here

Two events this week crystallized the central tension defining criminal justice technology in 2026. Cook County’s Board of Commissioners is weighing a $1.12 million contract with BriefCam — a Canon-owned AI video analytics platform — to deploy automated surveillance at one of America’s largest jails. Simultaneously, RAND released its AI Taxonomy for Criminal Justice for the Council on Criminal Justice, a framework that essentially tells the field: before you buy another AI tool, you need to understand what category of risk you’re taking on.

The juxtaposition is instructive. One is an agency reaching for a high-stakes, high-controversy technology without a governance framework in place. The other is an attempt to build that framework before the technology outruns it. Together, they expose a pattern that’s become alarmingly common: criminal justice agencies are deploying AI faster than they can govern it, and the people most affected — detainees, defendants, and supervised populations — have the least voice in the process.

What BriefCam Would Actually Do at Cook County Jail

The Cook County Sheriff’s office frames BriefCam as a necessity born of scale. The jail generates over 1.8 million hours of video footage monthly — a volume no human team can review in real time. Sheriff Tom Dart’s office argues the system would help staff respond faster to medical emergencies and speed up investigations. The use case they highlight is overdose response: if a detainee is found unresponsive, BriefCam could scan 12–24 hours of footage across thousands of cameras in minutes to identify how narcotics entered the facility.

That specific scenario is compelling. Nine people died at Cook County Jail last year, including Martinez Duncan, whose death was ruled a homicide. If AI video analysis can identify drug smuggling pathways or detect medical emergencies faster, that’s a genuine life-safety improvement.

But here’s where the proposal unravels. BriefCam’s core technology doesn’t just find footage — it creates searchable databases of faces, movements, and physical attributes. The system can catalog individuals by “gender, clothing, weight, height, gait, and other identifying characteristics,” as Stephen Ragan of the ACLU of Illinois points out. The sheriff’s office says it won’t connect BriefCam to any biometric database, making facial recognition impossible. Critics — rightfully — ask how “analyzing physical attributes” is meaningfully different from biometric identification when the system can track a specific person across every camera in the facility.

The Bias Problem That Doesn’t Disappear With “Human Review”

The sheriff’s office says all BriefCam alerts would require human review before action. This is a standard safeguard, but it is insufficient for a simple reason: human review of AI-generated alerts is not the same as independent human judgment.

Research in algorithmic decision-making consistently shows that when a system flags something as suspicious, the human reviewer is cognitively anchored to that assessment. This is called automation bias — the tendency to defer to machine-generated conclusions even when contradictory evidence is available. In a correctional environment where guards are already operating under stress, resource constraints, and implicit biases, an AI alert that says “suspicious activity detected in Cell Block D” is functionally equivalent to a directive.

The racial implications are particularly acute at Cook County Jail, where over half the detainee population is Black. Facial recognition and attribute-matching technologies have been documented to misidentify Black faces at higher rates than other demographics. The Baltimore case is a relevant cautionary tale: in October 2025, an AI gun-detection system at a Baltimore County school mistook a teenager’s bag of Doritos for a firearm, leading to an armed police response against a student. The system’s monitoring team attributed the error to “peculiarity of the lighting” — the kind of environmental condition that’s endemic in poorly lit correctional facilities.
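The automation-bias problem compounds a base-rate problem: when the behavior being detected is rare, even a seemingly accurate detector produces mostly false alerts. A minimal sketch with hypothetical numbers (none of these rates come from BriefCam, Cook County, or any vendor documentation):

```python
# Illustrative base-rate arithmetic. All rates below are assumptions
# chosen to show the effect, not measured performance figures.

def alert_precision(prevalence: float, sensitivity: float,
                    false_positive_rate: float) -> float:
    """Fraction of alerts that are true positives (Bayes' rule)."""
    true_alerts = prevalence * sensitivity
    false_alerts = (1 - prevalence) * false_positive_rate
    return true_alerts / (true_alerts + false_alerts)

# Suppose 1 in 10,000 monitored clips actually shows the flagged behavior,
# the system catches 90% of them, and it falsely flags just 1% of benign clips.
p = alert_precision(prevalence=1e-4, sensitivity=0.90, false_positive_rate=0.01)
print(f"{p:.1%} of alerts are real")  # prints "0.9% of alerts are real"
```

Under those assumptions, roughly 99 out of every 100 alerts are false, and each one lands in front of a reviewer who is cognitively anchored to treat it as real.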

[Image: RAND AI Criminal Justice Taxonomy 2026, a framework for evaluating AI applications across policing, courts, corrections, and community supervision]
The Council on Criminal Justice’s AI Taxonomy report by RAND, released May 2026, provides the first comprehensive framework for categorizing AI applications in criminal justice by function, risk level, and transparency. Source: Council on Criminal Justice / RAND Corporation.

The RAND Taxonomy: A Framework Catching Up to Reality

RAND’s AI Taxonomy for Criminal Justice arrives at a moment when the field desperately needs it. The report’s central insight is that AI applications in criminal justice are “often discussed and governed as if they were a single category of technology” — when in reality, a scheduling algorithm and a facial recognition system have fundamentally different risk profiles, data requirements, and governance needs.

The taxonomy organizes AI tools by sector (Policing, Courts, Corrections, Community Supervision), automation level (Fully Automated, Human Review Required, Decision Support Only), data type, structural equity risk, and transparency level. Key findings that directly bear on the BriefCam debate:

  • Risks for complex AI systems are concentrated in high-stakes functions — exactly the enforcement and supervision decisions that BriefCam would influence inside a jail
  • AI applications relying on past criminal justice data systematically reproduce racial and socioeconomic disparities — a finding that applies to any system trained or calibrated on law enforcement behavioral patterns
  • Oversight and transparency gaps are empirically documented across agencies — meaning the governance infrastructure that should exist before deployment usually doesn’t
  • AI adoption in low-risk administrative functions is slower than in high-risk ones — a perverse inversion where agencies reach for surveillance technology before digitizing routine paperwork
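The taxonomy's dimensions lend themselves to a simple data model. The sketch below encodes the axes listed above as Python enums and applies a coarse risk-triage heuristic; the axis names follow the report, but the data model and the `risk_tier` heuristic are our own illustration, not RAND's methodology:

```python
from dataclasses import dataclass
from enum import Enum, auto

# The sector and automation-level categories mirror the taxonomy's axes;
# everything else here is a hypothetical sketch.

class Sector(Enum):
    POLICING = auto()
    COURTS = auto()
    CORRECTIONS = auto()
    COMMUNITY_SUPERVISION = auto()

class AutomationLevel(Enum):
    FULLY_AUTOMATED = auto()
    HUMAN_REVIEW_REQUIRED = auto()
    DECISION_SUPPORT_ONLY = auto()

@dataclass
class AIApplication:
    name: str
    sector: Sector
    automation: AutomationLevel
    affects_individual_liberty: bool  # does the output bear on enforcement or supervision?

def risk_tier(app: AIApplication) -> str:
    """Coarse triage in the spirit of the taxonomy (our heuristic, not RAND's)."""
    if app.affects_individual_liberty and app.automation != AutomationLevel.DECISION_SUPPORT_ONLY:
        return "high"
    if app.affects_individual_liberty:
        return "medium"
    return "low"

briefcam = AIApplication("BriefCam video analytics", Sector.CORRECTIONS,
                         AutomationLevel.HUMAN_REVIEW_REQUIRED, True)
print(risk_tier(briefcam))  # prints "high"
```

Even this crude model captures the taxonomy's core point: "human review required" does not move a liberty-affecting system out of the high-risk tier.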

RAND’s primary recommendation is unambiguous: “Prioritize safeguards over expansion in high-risk areas.” Cook County’s rush to deploy BriefCam — before an independent review of jail conditions is complete, before governance protocols are established, and over the objections of 80 community organizations — is the exact pattern the taxonomy warns against.

The Right Way to Deploy Technology in Corrections

The RAND taxonomy implicitly creates a hierarchy of AI deployment appropriateness. At the low-risk end: scheduling algorithms, document processing, and administrative workflow tools. These are high-value, low-controversy applications where AI can reduce bureaucratic burden without directly affecting individual liberty. At the high-risk end: predictive analytics for enforcement decisions, facial recognition in custodial settings, and any system that generates actionable alerts about specific individuals.

Electronic monitoring sits in an interesting middle ground. GPS tracking and supervision platforms use geospatial data, behavioral patterns, and increasingly, predictive analytics to manage supervised populations. But — critically — the best-designed EM systems operate in what RAND calls “Decision Support Only” mode rather than generating automated enforcement actions. The technology provides data; a trained officer makes the judgment call.

This distinction matters enormously. An EM system that alerts an officer when a defendant approaches a restricted zone is providing decision support. An AI video system that flags “suspicious activity” based on gait analysis and triggers a guard response is something closer to automated enforcement — regardless of whether a human technically reviews the alert before acting.
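The structural difference can be made concrete in code. In the sketch below (all names and types are hypothetical), a decision-support design returns whatever a human decides, while an automated-enforcement design selects the action itself and leaves the human, at best, a veto:

```python
from dataclasses import dataclass
from typing import Callable, Optional

# Hypothetical sketch of the two architectures; not any vendor's API.

@dataclass
class Alert:
    source: str        # e.g. "geofence" or "gait-analysis"
    confidence: float  # model score in [0, 1]
    context: str       # supporting data shown to the officer

def decision_support(alert: Alert,
                     officer_judgment: Callable[[Alert], Optional[str]]) -> Optional[str]:
    """The system only surfaces data; the action (or none) comes from a person."""
    return officer_judgment(alert)

def automated_enforcement(alert: Alert, threshold: float = 0.8) -> Optional[str]:
    """The system selects the action; 'human review' becomes a rubber stamp."""
    return "dispatch_response_team" if alert.confidence >= threshold else None

alert = Alert("gait-analysis", 0.85, "movement flagged near Cell Block D")
print(automated_enforcement(alert))              # prints "dispatch_response_team"
print(decision_support(alert, lambda a: None))   # prints "None": officer declines to act
```

The difference is where the default lies: in the first function, no action happens unless a person chooses one; in the second, action is the default and a person can only interrupt it.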

The GPS ankle monitoring industry has spent two decades learning painful lessons about false alarms. Early-generation tamper detection sensors based on heart rate, skin conductivity, and resistive circuits produced false-positive rates of 15–30%, sending officers to investigate compliant defendants and eroding trust in the entire system. The industry’s trajectory has been toward reducing false positives through better sensor technology — fiber-optic tamper detection, for example, operates as a binary signal (light passes or it doesn’t), eliminating the ambiguity that generates false alarms.

AI video analytics is at an earlier and more dangerous point on that same learning curve. The difference is that when a GPS ankle monitor generates a false tamper alarm, the consequence is an unnecessary home visit. When an AI surveillance system generates a false alert inside a jail, the consequence can be a use-of-force incident against a detainee who wasn’t doing anything wrong.

Where Should Agencies Invest?

For corrections and community supervision agencies evaluating technology investments, the RAND taxonomy provides a useful decision framework:

Invest first in administrative AI (low risk, high efficiency gain): Court scheduling, case document processing, supervision record management, compliance reporting automation. These are unglamorous applications that free up officer time for the human judgment calls that actually matter.

Invest second in transparent, well-governed monitoring tools (medium risk, direct public safety benefit): GPS electronic monitoring with validated risk assessment integration, where the technology provides situational awareness and the officer retains decision authority. The key is equipment that minimizes false positives — because every false alarm is a governance failure that erodes both officer trust and defendant compliance.

Proceed with extreme caution on AI surveillance in custodial settings (high risk, contested benefit): Tools like BriefCam offer genuine operational value, but only when deployed within a governance framework that includes independent auditing, documented explainability, bias testing on the specific population being monitored, and genuine accountability mechanisms when the system gets it wrong. Cook County hasn’t built any of these structures yet.

Avoid automated enforcement triggers entirely until governance catches up: No AI system in a correctional or supervision context should directly trigger enforcement actions without meaningful human assessment — not a rubber-stamp “human review” of machine-generated alerts, but genuine independent evaluation.

[Image: AI-powered surveillance camera in a correctional facility]
AI video analytics systems like BriefCam can process millions of hours of surveillance footage, but their deployment in correctional facilities — where over half the detainee population at Cook County is Black — raises acute questions about algorithmic bias and proportional governance. Photo: Pexels.

The Accountability Gap

The 80 organizations opposing the Cook County BriefCam contract aren’t anti-technology. They’re anti-deployment-without-accountability. Their letter to commissioners makes a straightforward demand: before spending $1.12 million on AI surveillance, complete the independent review of jail conditions that’s been pending while people continue to die in custody.

That demand maps directly onto RAND’s recommendation to “prioritize safeguards over expansion in high-risk areas.” It’s not a radical position — it’s basic project management applied to technology with civil rights implications.

The criminal justice system will increasingly use AI. That’s inevitable. The question is whether that adoption follows the RAND taxonomy’s framework — risk-proportionate deployment with governance structures in place — or the Cook County pattern of procurement-driven implementation where the technology precedes the safeguards.

The answer matters for the 657,500 people currently in American jails, for the communities they return to, and for a justice system whose legitimacy depends on the perception — and reality — of fairness.

Need GPS Ankle Monitors for Your Agency?

Contact us for a consultation and product evaluation.

Contact Sales