A federal judge in the Southern District of Texas issues a standing order requiring attorneys to certify that no AI-generated content in their filings was used without human verification. Across the Pacific, the Law Society of New South Wales convenes its most intensely debated panel on AI adoption in years. Meanwhile, legal scholars at Lawfare publish a 5,000-word research agenda warning that executive branch AI systems may fundamentally restructure the separation of powers.
These are not parallel developments. They are symptoms of the same underlying tension: the legal system is absorbing AI faster than it is developing the accountability frameworks to govern it. For criminal justice technology vendors—including those building GPS monitoring platforms, risk assessment tools, and offender supervision systems—this gap between capability and accountability is not an abstract policy concern. It is an engineering requirement.
What does “defensible AI” actually mean in criminal justice technology?
Defensible AI is artificial intelligence whose outputs can withstand legal scrutiny in adversarial proceedings. In a criminal justice context, this means every AI-generated recommendation—whether a risk score, a violation flag, or a behavioral pattern alert—must be traceable to its data inputs, explainable in plain language, and reproducible by an independent auditor.
This standard goes far beyond technical accuracy. A risk assessment algorithm that correctly predicts recidivism 85% of the time is worthless in court if a defense attorney can demonstrate that the model’s training data systematically underrepresented certain demographics, or that the system’s decision logic cannot be articulated to a jury. The Daubert standard for expert testimony in federal courts requires that scientific evidence be based on testable methodology, subjected to peer review, and generally accepted in the relevant scientific community. AI systems that cannot meet this threshold are litigation liabilities, not supervision tools.
How is AI actually being used in legal practice today—and what are the real limitations?
Nicole Byrne, a criminal and civil law practitioner at O’Brien Solicitors in New South Wales, offered a candid assessment at a recent LexisNexis and Law Society of NSW panel, one that cuts through the industry hype. Her position is instructive for criminal justice technology developers: “AI is a tool. A useful one, integrated thoughtfully across our practice to support research, initial drafting, and the kind of administrative work that used to consume time better spent on clients. But the analysis, the judgment, the strategy—that remains entirely human” (O’Brien Solicitors, 2026).
This framing—AI as amplifier of human judgment rather than replacement—maps directly onto how criminal justice AI should operate. The tool handles pattern recognition across massive datasets. The human applies contextual judgment, ethical reasoning, and constitutional awareness that no current AI system possesses.
Byrne’s most pointed observation addresses accountability. Asked who carries responsibility for AI-assisted work, her answer leaves no ambiguity: “The lawyer. Fully. Without qualification.” In criminal law, she notes, “mistakes can affect a person’s liberty.” This accountability framework has direct implications for electronic monitoring: when a GPS ankle monitor’s AI system flags a behavioral anomaly that triggers a warrant or revocation hearing, the supervising officer—not the algorithm—bears responsibility for the decision to act on that flag.

Why does the executive branch AI governance gap threaten criminal justice technology?
The Lawfare research agenda published by legal scholars Cullen O’Keefe and colleagues identifies a structural risk that criminal justice technology vendors cannot afford to ignore: “absent specific countervailing policy measures, further advances in AI technology are likely to empower the executive branch at the expense of the coordinate branches of government” (Lawfare, 2026).
In criminal justice, this power asymmetry is already visible. Prosecutors and law enforcement agencies have access to AI-powered surveillance, predictive analytics, and automated evidence processing tools that defense attorneys cannot match. The COMPAS recidivism prediction algorithm used across multiple U.S. states was famously challenged in State v. Loomis (2016), where the Wisconsin Supreme Court upheld its use in sentencing but required a written advisement that the algorithm’s methodology is proprietary and undisclosed—meaning the defendant could not fully examine the basis for his sentence.
The Lawfare scholars frame this as a fundamental governance question: “AI, as currently developed, is a centralizing technology. Institutions that can exercise effective control over these factors of production, and steer them toward their own ends, will be advantaged by the AI revolution.” For criminal justice, this means that AI tools deployed by corrections and supervision agencies will inherently concentrate decision-making power—unless the tools are deliberately architected to preserve transparency, auditability, and human override capability.
What technical architecture makes criminal justice AI defensible?
Based on both the practitioner experience Byrne describes and the governance framework the Lawfare scholars propose, defensible criminal justice AI requires five architectural pillars:
1. Full decision provenance chain. Every AI-generated output must be traceable from final recommendation back through the analytical pipeline to raw input data. For GPS monitoring, this means a behavioral anomaly flag should link to the specific location data points, time windows, and pattern-matching rules that triggered it. An officer reviewing the alert—or a defense attorney challenging it in court—should be able to reconstruct exactly how the system reached its conclusion.
2. Explainability at multiple levels. The system must generate explanations appropriate for different audiences: a technical audit log for system administrators, a plain-language summary for supervising officers, and a court-admissible narrative for legal proceedings. This is not optional decoration on top of a black-box model. It is a structural requirement that constrains which AI architectures are appropriate for criminal justice applications.
3. Mandatory human-in-the-loop for consequential decisions. The Lawfare scholars ask: “When are human-in-the-loop requirements a valuable means of ensuring individual accountability for AI actions?” In criminal justice, the answer is clear: always, for any decision that could restrict liberty. AI systems should surface recommendations and supporting evidence. Humans make decisions. The system should log both the AI recommendation and the human decision, creating an attribution chain for accountability (a minimal sketch of such a chain follows this list).
4. Bias detection and continuous auditing. Byrne’s firm runs training sessions on AI use “across all levels—junior lawyers, paralegals, and senior practitioners.” Similarly, criminal justice AI systems need continuous bias auditing that examines whether the system’s recommendations differ systematically across demographic groups, geographic areas, or case types. This is not a one-time certification. It is an ongoing operational requirement.
5. Adversarial testing as a design requirement. The Lawfare scholars note that courts need tools to verify “whether ExecAI has been used to violate citizens’ rights.” For criminal justice AI, this translates to a requirement that the system be designed to withstand adversarial scrutiny from the outset. Defense attorneys will challenge these systems. The technical architecture must anticipate and accommodate that challenge, not resist it.
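To make pillars 1 and 3 concrete, here is a minimal sketch of what an attribution chain might look like in code. The class and field names are illustrative assumptions, not CO-EYE’s actual data model; the point is the structure: every recommendation carries pointers to its evidence, and every human decision is logged against the recommendation it answers.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class Evidence:
    """A single input that contributed to an AI recommendation."""
    source: str        # e.g. "gps_fix" or "geofence_rule" (illustrative)
    record_id: str     # pointer back to the raw data store, not a copy
    timestamp: datetime

@dataclass
class Recommendation:
    """AI output: advisory only, never an action."""
    summary: str
    confidence: float                      # model confidence, 0.0 to 1.0
    evidence: list[Evidence] = field(default_factory=list)

@dataclass
class Decision:
    """The human decision, logged against the recommendation it answers."""
    recommendation: Recommendation
    officer_id: str
    action: str                            # e.g. "field_visit", "no_action"
    rationale: str
    decided_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc))

def audit_record(decision: Decision) -> dict:
    """Flatten the full attribution chain into one auditable record."""
    rec = decision.recommendation
    return {
        "ai_summary": rec.summary,
        "ai_confidence": rec.confidence,
        "evidence_ids": [e.record_id for e in rec.evidence],
        "officer": decision.officer_id,
        "action": decision.action,
        "rationale": decision.rationale,
        "decided_at": decision.decided_at.isoformat(),
    }
```

Because the evidence list stores record identifiers rather than copies, an independent auditor can pull the original raw data and replay the chain end to end.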

How does AI-powered dynamic risk assessment work in GPS monitoring?
The convergence of GPS monitoring data and AI analytics creates what may be the most data-rich behavioral assessment capability in community supervision. A GPS ankle monitor generating location fixes every 5 minutes produces 288 data points per day—105,120 per year per individual. For an agency monitoring 500 offenders, that is more than 52.5 million annual location records.
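The arithmetic behind those figures, as a quick sanity check:

```python
FIX_INTERVAL_MINUTES = 5
fixes_per_day = (24 * 60) // FIX_INTERVAL_MINUTES    # 288
fixes_per_year = fixes_per_day * 365                 # 105,120
caseload = 500
annual_records = fixes_per_year * caseload           # 52,560,000
print(f"{annual_records:,} location records per year")
```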
Static risk assessment tools like the Level of Service Inventory-Revised (LSI-R) or the Ohio Risk Assessment System (ORAS) evaluate offenders at intake based on historical factors: criminal history, substance abuse history, employment status, and similar variables. These instruments are validated and court-accepted, but they are snapshots. An offender assessed as medium-risk at intake who subsequently loses employment, begins frequenting locations associated with prior criminal activity, and shows disrupted sleep patterns—all observable through continuous GPS data—remains classified as medium-risk until the next manual reassessment.
Dynamic risk assessment bridges this gap. The NIJ-funded IDRACS project (Integrated Dynamic Risk Assessment for Community Supervision), developed by RTI International using data from over 160,000 supervised individuals in Georgia, demonstrated that dynamic factors—drug test results, employment verification, program attendance, technical violations—are significantly more predictive of recidivism than static intake assessments (National Institute of Justice). The Swedish OxMore tool, validated on 59,676 community-sentenced individuals, confirmed these findings, achieving a c-index (a standard measure of predictive discrimination) of 0.74 for violent reoffending when dynamic variables were used.
CO-EYE’s monitoring software has incorporated this research trajectory into its platform architecture. The system’s AI-powered behavioral analysis module continuously evaluates five dimensions derived from GPS telemetry: residence stability, employment regularity, device compliance, geofence adherence, and overall behavioral pattern consistency. Rather than replacing officer judgment, the system generates dynamic risk indicators that flag which cases in an 80-person caseload require immediate attention—and, critically, provides the data provenance chain that makes each flag explainable and defensible.
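As an illustration of how a module like this could combine the five dimensions into an explainable flag, here is a hedged sketch. The dimension names follow the description above, but the weights, thresholds, and scoring logic are hypothetical, not CO-EYE’s actual model:

```python
# Illustrative only: the dimension names follow the article, but the
# weights, thresholds, and scoring logic here are hypothetical.
DIMENSIONS = ("residence_stability", "employment_regularity",
              "device_compliance", "geofence_adherence",
              "pattern_consistency")
WEIGHTS = {d: 0.2 for d in DIMENSIONS}   # equal weights for the sketch

def dynamic_risk_indicator(scores: dict[str, float]) -> tuple[float, list[str]]:
    """Combine per-dimension scores (0 = stable, 1 = disrupted) into one
    indicator, and return the dimensions driving it so the flag stays
    explainable rather than a bare number."""
    indicator = sum(WEIGHTS[d] * scores[d] for d in DIMENSIONS)
    drivers = [d for d in DIMENSIONS if scores[d] >= 0.5]
    return indicator, drivers

score, drivers = dynamic_risk_indicator({
    "residence_stability": 0.8,    # disrupted overnight location pattern
    "employment_regularity": 0.7,  # missed work-hours geofence windows
    "device_compliance": 0.1,
    "geofence_adherence": 0.2,
    "pattern_consistency": 0.6,
})
# score == 0.48; drivers lists the three disrupted dimensions
```

Returning the driving dimensions alongside the score is what keeps the flag explainable: the officer sees not just a number but which behaviors moved it.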
What separates defensible AI from “AI-washing” in criminal justice?
The legal profession’s experience with AI adoption offers a warning to criminal justice technology. Byrne observes that “the conversation around artificial intelligence in law has been loud, fast-moving, and not always honest.” The same dynamic affects criminal justice technology, where vendors increasingly claim “AI-powered” capabilities without specifying what the AI actually does, what data it uses, or how its outputs can be verified.
The Lawfare scholars identify the core governance question: “How can Congress, the executive branch, and the American people be confident that any particular ExecAI has been procured in accordance with law and satisfies all design requirements imposed by law? How can they be confident that there are no ‘backdoors’ that would cause the AI to behave lawlessly under certain conditions?”
For criminal justice procurement teams evaluating AI-powered monitoring platforms, this question translates into a concrete technical checklist:
| Evaluation Criterion | AI-Washing Red Flag | Defensible AI Standard |
|---|---|---|
| Model transparency | “Proprietary algorithm” with no disclosure | Published methodology, auditable model architecture, explainable outputs |
| Training data documentation | Unspecified or vague data sources | Documented training datasets with demographic composition, collection methodology, and known limitations |
| Bias auditing | One-time validation study from vendor | Continuous bias monitoring with automated alerts when outputs diverge across protected classes |
| Decision attribution | Single risk score with no explanation | Multi-factor risk profile with individual factor weights, data sources, and confidence intervals |
| Human override capability | Automated actions with no human review | AI generates recommendations; human officers make decisions; both are logged with timestamps and attribution |
| Court admissibility preparation | No consideration of legal challenges | Designed to meet Daubert/Frye standards; exports include chain-of-custody metadata and audit trails |
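The “Bias auditing” row implies a concrete mechanism. A minimal sketch of one such continuous check, with a hypothetical record schema and an illustrative disparity threshold, might look like this:

```python
from collections import defaultdict

def flag_rate_by_group(alerts: list[dict]) -> dict[str, float]:
    """Alert rate per group; each record is assumed to carry 'group'
    and 'flagged' keys (hypothetical schema, not a vendor API)."""
    totals, flagged = defaultdict(int), defaultdict(int)
    for a in alerts:
        totals[a["group"]] += 1
        flagged[a["group"]] += int(a["flagged"])
    return {g: flagged[g] / totals[g] for g in totals}

def disparity_alert(rates: dict[str, float], ratio: float = 1.25) -> bool:
    """Fire when the highest group rate exceeds the lowest by more than
    the chosen ratio; the 1.25 default is illustrative."""
    lo, hi = min(rates.values()), max(rates.values())
    if lo == 0:
        return hi > 0   # flags in one group and none in another
    return hi / lo > ratio
```

A production audit would add significance testing and multiple fairness metrics, but even this simple rate comparison, run continuously, catches drift that a one-time validation study would miss.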
How should GPS monitoring platforms integrate AI responsibly?
Byrne’s framework for AI in legal practice—“AI speeds up the groundwork so the team can spend more time on the thinking that matters”—is the correct model for GPS monitoring. The platform handles data collection, pattern recognition, and anomaly detection at machine scale. Officers apply judgment, context, and constitutional awareness at human scale.
In practical terms, this means GPS monitoring AI should operate in three layers:
Layer 1: Data collection and enrichment. The GPS ankle monitor collects continuous location data across multiple positioning modes (GNSS, WiFi, BLE, cellular). The platform enriches raw coordinates with contextual data: location type (residential, commercial, restricted zone), time-of-day patterns, indoor/outdoor classification, and historical behavioral baselines. CO-EYE’s tri-mode connectivity architecture (BLE + WiFi + LTE) ensures continuous data collection across all environments—critical for analytical integrity, since 24-hour battery devices create 12-16 hour daily data gaps that compromise pattern analysis.
Layer 2: Pattern analysis and anomaly detection. AI models analyze enriched data streams to identify behavioral shifts: disrupted residence patterns, irregular employment attendance, new location associations, nighttime behavior changes. Each detected pattern is scored for statistical significance and tagged with its contributing data points. This layer generates recommendations, not decisions (see the sketch following Layer 3).
Layer 3: Human decision support with full attribution. Officers receive prioritized case alerts with supporting evidence. Each alert includes: the specific behavioral pattern detected, the data points supporting the detection, the statistical confidence level, and the recommended response. The officer reviews, decides, and acts. The system logs the AI recommendation, the officer’s decision, and the rationale—creating the accountability chain that Byrne and the Lawfare scholars both identify as non-negotiable.
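As a sketch of the Layer 2-to-Layer 3 handoff, consider a baseline-deviation test on a single behavioral metric. The function name, data, and threshold are illustrative assumptions; a production system would use richer statistical machinery:

```python
import statistics

def anomaly_score(baseline: list[float], observed: float) -> float:
    """Standard-score deviation of today's value for one behavioral
    metric (e.g. overnight hours at the residence) against the
    individual's own 30-day baseline."""
    mu = statistics.fmean(baseline)
    sigma = statistics.pstdev(baseline) or 1e-9   # guard flat baselines
    return abs(observed - mu) / sigma

baseline = [7.8, 8.1, 7.9, 8.0, 7.7] * 6   # 30 days of overnight hours
z = anomaly_score(baseline, observed=3.5)   # last night: 3.5 hours
if z > 3.0:                                 # threshold is illustrative
    # Layer 3 handoff: queue for officer review, logging the baseline
    # window, the observation, and the score so the flag stays
    # reproducible in any later proceeding.
    print(f"flag for officer review: deviation z = {z:.1f}")
```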
The bottom line: accountability architecture is the product differentiator
The legal profession and legal scholarship are converging on a clear message: AI capability without accountability architecture is a liability, not an asset. Byrne’s practitioner experience shows that the firms succeeding with AI are those that “maintain rigorous standards, protect their culture, and understand that the technology serves the practice.” The Lawfare scholars’ governance framework shows that the democratic institutions succeeding with AI will be those that build transparency, auditability, and human oversight into the technology from the ground up.
For GPS monitoring and criminal justice technology, the implication is direct: the next generation of competition is not about who has the most sophisticated algorithm. It is about who has the most defensible one. Agencies need AI that makes their officers more effective, generates insights that withstand legal scrutiny, and respects the constitutional rights of the individuals under supervision.
The vendors that build accountability into their architecture—rather than bolting it on as an afterthought—will be the ones that procurement teams trust with the most consequential decisions in community supervision. Contact our team to learn how CO-EYE’s monitoring platform integrates defensible AI into every layer of GPS supervision.