The American courtroom has always been a conservative institution. It has its rituals, its architecture, its formal language, and a deep resistance to any novelty untested by experience and precedent. Witnesses stand and swear oaths. Evidence moves through chains of custody. Attorneys argue before juries that have changed remarkably little in their composition or function since the country was founded.
That conservatism is now straining against a wave of technological change that is arriving faster than courts can develop rules to govern it. Virtual reality headsets are appearing in stand-your-ground hearings. AI-generated deepfakes have been submitted as authentic testimony. Evidence presentation has moved from paper exhibits to high-definition interactive displays. Real-time transcription tools are replacing stenographers. And the legal frameworks governing all of this are still under construction.
The result is a courtroom that is changing in ways that are simultaneously exciting, legally contested, and genuinely concerning for the integrity of the fact-finding process that trials exist to serve.
Key Takeaways
- In December 2024, Broward County Judge Andrew Siegel became the first American judge to use virtual reality in court proceedings, donning an Oculus Quest 2 headset to experience a crime scene recreation from the defendant’s perspective in a stand-your-ground case.
- A California case uncovered what appears to be one of the first instances of a deepfake submitted as purportedly authentic testimony — where Judge Victoria Kolakowski noticed a nearly motionless face and strange cuts in witness footage later determined to be AI-generated.
- Proposed Federal Rule of Evidence 707, released for public comment in August 2025 and under active consideration, would subject machine-generated evidence to the same reliability standards as expert testimony — but critics note it only covers evidence the proponent acknowledges as AI-generated, not disputed deepfakes.
- Louisiana became the first state with a statewide framework specifically addressing AI-generated evidence, effective August 2025, requiring attorneys to exercise reasonable diligence in verifying the authenticity of evidence they submit.
- Research consistently shows that witnesses who testify via video are perceived as less credible by jurors than those who appear in person — raising fairness concerns that courts have not fully resolved despite the pandemic’s normalization of remote proceedings.
- AI-powered transcription, translation, and case management tools are improving court efficiency and expanding access to justice for self-represented litigants — while simultaneously raising concerns about bias, accuracy, and the equity implications when wealthier parties have better access to cutting-edge presentation technology.
Virtual Reality in the Courtroom: The Florida First
The most dramatic courtroom technology development in recent memory happened not in a federal appellate court or a high-profile criminal trial, but in a stand-your-ground hearing in Fort Lauderdale.
On December 14, 2024, Broward County Circuit Court Judge Andrew Siegel put on an Oculus Quest 2 headset and stepped into a virtual recreation of a crime scene. The case involved Miguel Albisu, a wedding venue owner charged with aggravated assault, whose defense team had built a VR simulation of the incident from the defendant’s perspective. Defense attorney Ken Padowitz — described by the American Bar Association as “a veteran trial attorney with a long history of embracing emerging technology for use in court” — presented the simulation not as direct evidence but as a demonstrative exhibit to illustrate and contextualize the defense’s argument.
What Judge Siegel experienced in that headset was built through forensic analysis, allowing him to understand not just the words of testimony but the physical conditions of the space, the sight lines, and the spatial relationships that the defendant claimed justified his actions. Legal analyst David Weinstein, speaking to Local 10 News, noted the unsettling implication: if judges can experience a 3D reconstruction of disputed events, it begins to raise questions about when a jury is even necessary. What does cross-examination look like for a simulation? Who authenticates the accuracy of the virtual reconstruction? And what happens when the simulation, however carefully made, contains assumptions that favor one party?
These questions don’t have answers yet. What the Florida case demonstrated is that VR in courts is no longer theoretical. It happened, and courts are now going to have to decide what rules govern it.
The Deepfake Crisis: When AI-Generated Evidence Reaches Judges
The scenario that courts have been dreading arrived quietly in a California civil case. Judge Victoria Kolakowski was reviewing video testimony from a witness in Mendones v. Cushman & Wakefield when something felt wrong. The witness’s face was nearly motionless. There were strange cuts. Her mannerisms seemed to repeat in unusual patterns. Judge Kolakowski identified the footage as AI-generated audio and video of a real person. The self-represented plaintiffs had submitted a deepfake as authentic witness testimony.
The National Center for State Courts has described the incident as appearing to be among the first cases where a deepfake was submitted as purportedly authentic evidence and detected as AI-generated. The detection in this instance depended entirely on a judge’s intuition about something feeling visually wrong — not on any systematic technical process or evidentiary framework. That is not a repeatable or scalable approach to protecting the integrity of the fact-finding process.
The broader problem is structural. Technologies designed to detect AI-generated content have proven unreliable, and humans are generally poor judges of whether a digital artifact is real or fabricated. Detection tools and generation tools are in a continuous arms race, with generation consistently staying ahead. As the University of Chicago Legal Forum has noted, the generation and detection of fake evidence will be an ongoing cat-and-mouse game — and courts need rules for handling it now, not after it becomes routine.
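The detection problem can be made concrete with a toy heuristic in the spirit of what Judge Kolakowski noticed by eye: a nearly motionless face means unusually little change between frames. The sketch below is pure Python, with frames modeled as flat lists of grayscale values; `flag_low_motion` and its threshold are illustrative inventions, not a real forensic tool, and as the arms-race point above implies, a generator can trivially defeat this check by adding motion.

```python
def mean_abs_diff(frame_a, frame_b):
    """Average absolute per-pixel difference between two grayscale frames."""
    return sum(abs(a - b) for a, b in zip(frame_a, frame_b)) / len(frame_a)

def flag_low_motion(frames, threshold=2.0):
    """Crude heuristic: flag footage whose average inter-frame change falls
    below the threshold, one possible signal of unnaturally static video.
    The threshold is arbitrary; real detectors are far more sophisticated
    and still unreliable."""
    diffs = [mean_abs_diff(frames[i], frames[i + 1])
             for i in range(len(frames) - 1)]
    return sum(diffs) / len(diffs) < threshold

# A clip of identical frames is flagged; a clip with steady motion is not.
static_clip = [[128] * 16 for _ in range(5)]
moving_clip = [[(i * 10 + j) % 256 for j in range(16)] for i in range(5)]
```

Here `flag_low_motion(static_clip)` returns True while `flag_low_motion(moving_clip)` returns False, which is exactly the fragility of the approach: it formalizes one visual tell and misses every other.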
The challenge of deepfake evidence doesn’t stop at manufactured testimony. In Wisconsin v. Rittenhouse, defense counsel challenged prosecution efforts to zoom in on iPad video evidence, arguing that Apple’s pinch-to-zoom function uses AI that could manipulate footage. The court required expert testimony that the zoom function would not alter the underlying video — testimony the prosecution could not provide on short notice. The case illustrates how AI authentication concerns can arise even with evidence no one claims was fabricated, simply because AI touches so many steps in the digital chain of custody.
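The pinch-to-zoom dispute turns on a technical point that is easy to demonstrate: upscaling an image requires interpolation, and interpolation synthesizes pixel values the camera never recorded. Apple's actual algorithm is not public; the sketch below uses simple linear interpolation on a one-dimensional row of pixels purely to illustrate that principle.

```python
def upscale_linear(row, factor):
    """Upscale a 1-D row of pixel values by linear interpolation.
    Every interpolated position receives a newly computed value that
    was never present in the source data."""
    out = []
    for i in range(len(row) - 1):
        for k in range(factor):
            t = k / factor  # fractional position between two real pixels
            out.append(row[i] * (1 - t) + row[i + 1] * t)
    out.append(row[-1])
    return out

original = [0, 100]                    # two recorded pixel values
zoomed = upscale_linear(original, 4)
# zoomed == [0.0, 25.0, 50.0, 75.0, 100]: three of the five values were
# computed by the algorithm, not recorded by the camera
```

Whether such synthesized values "alter" evidence in a legally meaningful sense is precisely the question the Rittenhouse court found it could not resolve without expert testimony.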
The Regulatory Response: Rule 707 and Louisiana’s Lead
Courts and policymakers have begun to respond, though the legal architecture is still taking shape.
The most significant pending development at the federal level is proposed Rule of Evidence 707. Released by the Committee on Rules of Practice and Procedure of the Judicial Conference for public comment in August 2025, the rule would subject machine-generated evidence to the same reliability standards currently applied to expert testimony — essentially requiring proponents of AI-generated materials to demonstrate their reliability before they can be admitted.
Critics have identified a critical gap: Rule 707 applies only to evidence that the proponent acknowledges was created by AI. It does nothing to address evidence whose authenticity is in dispute — which is precisely the scenario the Mendones deepfake case presented. A party submitting a fabricated video as genuine won’t announce it as AI-generated. Rule 707, as currently drafted, leaves that problem largely unaddressed.
Louisiana moved independently and earlier. Its framework, effective August 1, 2025, became the first statewide law specifically addressing AI-generated evidence by requiring attorneys to exercise reasonable diligence in verifying the authenticity of evidence they submit. The standard shifts professional responsibility onto counsel — creating malpractice and disciplinary exposure for submitting AI-generated material without adequate verification.
California’s Judicial Council AI Task Force is developing guidance for evaluating AI-generated evidence, and the National Center for State Courts has published bench cards to help judges assess both acknowledged and disputed AI-generated materials. The bench cards represent a practical acknowledgment that the legal framework hasn’t caught up with the technology — and that judges need some guidance now, even if it’s informal.
How Evidence Is Actually Being Presented: The Display Revolution
The deepfake and VR conversations happen at the frontier. What’s already transforming trials, quietly and pervasively, is the evolution of evidence presentation technology in everyday courtrooms.
Interactive high-definition displays, document cameras, wireless presentation systems, and touch-screen interfaces have replaced physical exhibits, blown-up photographs, and overhead projectors in most modern courtrooms. Attorneys can now annotate digital evidence in real time, display documents side by side for comparison, zoom into fine print or small objects, synchronize deposition testimony with live witness testimony to expose inconsistencies, and present evidence in chronological timeline formats that help juries understand complex event sequences.
The effects on trial dynamics are real. Jurors arrive in courtrooms shaped by years of interacting with smartphones, tablets, and dynamic digital interfaces — with expectations about information that static oral presentation simply doesn’t meet. Attorneys who continue relying exclusively on traditional methods risk losing jury attention in complex cases involving financial documents, medical records, technical specifications, or surveillance footage.
The implications extend to jury deliberations. Modern jury rooms are increasingly equipped with high-resolution displays that allow jurors to review digital evidence during deliberations — accessing the same footage, documents, and exhibits from the trial room rather than relying on memory or handwritten notes. Touch-enabled screens and wireless content sharing encourage engagement, reduce misunderstandings, and give jurors equal access to materials regardless of seating position.
There is, however, an equity dimension that courts have not adequately grappled with. A wealthy plaintiff presenting a virtual reality reconstruction of an accident scene, developed with AI and forensic experts, creates an experience for a jury that a defendant without comparable resources cannot rebut in kind. Advanced presentation technology advantages well-resourced parties. When one side has photorealistic animations and the other has diagrams, the visual asymmetry may influence juror perception regardless of which party’s factual account is more accurate.
Remote Testimony: Benefits, Persistence, and the Credibility Problem
The COVID-19 pandemic forced courts to expand remote testimony at an unprecedented scale and speed. What was initially an emergency measure has become permanent infrastructure in many jurisdictions — used for geographically distant expert witnesses, vulnerable participants, and routine hearings that don’t require in-person attendance.
The technology works. High-definition video conferencing platforms enable clear communication, shared evidence presentation, and the basic mechanics of examination and cross-examination. Federal courts have found remote testimony constitutionally permissible in many contexts, including in cases where two-way video was held to preserve the “face-to-face confrontation” required by the Sixth Amendment.
The credibility research, however, is consistently troubling. Multiple studies have found that witnesses who testify via video are perceived by jurors as less honest, less credible, less accurate, and less convincing than witnesses who appear in person. Children testifying remotely are viewed as less accurate, less intelligent, and more likely to be making up their accounts. Judges may set higher bail and impose harsher sentences on defendants who appear remotely rather than in person. These are not trivial effects in a system where credibility assessments determine guilt, innocence, and liberty.
The inconvenient conclusion is that the technology that expanded access to courts may systematically disadvantage certain categories of participants — particularly crime victims who testify remotely, children in abuse cases, and defendants who cannot physically appear. Courts are only beginning to grapple with whether these effects are real at scale, reversible through better implementation, or an irreducible feature of remote participation that requires deliberate policy responses.
AI on the Administrative Side: The Efficiency Argument
Behind the dramatic headlines about VR and deepfakes, AI is doing something less visible but potentially more consequential: changing how courts manage their own operations.
AI-powered transcription is being adopted to address stenographer shortages and reduce transcript costs. Courts are generating unofficial or preliminary real-time transcriptions that accelerate case documentation, reduce delays, and improve accessibility for self-represented litigants navigating complex procedures. Translation tools provide on-the-fly interpretation for non-English speakers, addressing a chronic underfunding of court interpreter services that has long disadvantaged defendants and witnesses with limited English.
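Accuracy concerns about AI transcription are usually quantified with word error rate (WER): the minimum number of word-level edits needed to turn the hypothesis transcript into the reference, divided by the reference length. A court benchmarking an unofficial AI transcript against a certified one could compute it as in this self-contained sketch (real evaluations also normalize punctuation and casing first):

```python
def word_error_rate(reference, hypothesis):
    """WER via word-level edit distance (substitutions, insertions,
    deletions), normalized by the reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,         # deletion
                           dp[i][j - 1] + 1,         # insertion
                           dp[i - 1][j - 1] + cost)  # substitution
    return dp[len(ref)][len(hyp)] / len(ref)
```

One substituted word in a four-word utterance, for instance, yields a WER of 0.25, a single number that lets courts set concrete accuracy thresholds before treating an AI transcript as reliable.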
Case management AI tools help courts identify backlogs, predict caseload trends, and match defendants to diversion or treatment programs. In Argentina, the AI legal assistant Prometea helped legal professionals process nearly 490 cases per month, compared to just 130 before its introduction — nearly a 300% productivity increase. Courts facing overwhelming caseloads and constrained budgets are increasingly viewing AI adoption as the only scalable path to reducing backlogs that cause delays lasting years.
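For what it's worth, the Prometea figures check out arithmetically: going from 130 to nearly 490 cases per month is roughly a 277% increase, or about 3.8 times the original throughput, in line with the "nearly 300%" characterization.

```python
before_cases, after_cases = 130, 490  # monthly caseloads cited above

pct_increase = (after_cases - before_cases) / before_cases * 100
throughput_multiple = after_cases / before_cases

print(f"Increase: {pct_increase:.0f}%  ({throughput_multiple:.1f}x throughput)")
# Increase: 277%  (3.8x throughput)
```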
UNESCO developed new Guidelines for the Use of AI Systems in Courts and Tribunals in late 2025, responding to what it described as rapid and uneven adoption across justice systems worldwide. The guidelines include 15 principles covering information security, auditability, and the requirement that human judgment remain responsible for actual decisions. Senior UK judge Dame Victoria Sharp warned of “serious implications for the administration of justice and public confidence in the justice system if artificial intelligence is misused” — a warning that applies as much to the administrative applications of AI as to its use in evidence presentation.
The Access Question Courts Haven’t Fully Answered
The overarching tension running through all of courtroom technology is one of access and equity. Technology that improves the experience of well-resourced parties while remaining out of reach for underfunded defendants and pro se litigants doesn’t improve justice — it reshapes its distribution.
When prosecution agencies funded by government budgets use AI to analyze surveillance footage, identify faces, and present forensic reconstructions, while public defenders lack equivalent tools, the resulting asymmetry may undermine the adversarial process’s ability to reach accurate verdicts. When sophisticated evidence presentation technology is available only to parties who can afford trial consulting firms and forensic animation specialists, courtrooms become theaters of resource inequality as much as forums for fact-finding.
The ABA’s Year 2 Report on the Impact of AI on the Practice of Law notes that AI adoption in the legal profession has entered a new phase — the early ethical hand-wringing has given way to operational deployment. That deployment is happening unevenly, in courts as everywhere else. The question courts need to be asking is not just whether a given technology is effective, but whether its effects are distributed in ways that serve or undermine the constitutional promise of a fair trial.
The rules are being written right now, in real cases, by judges and committees struggling to keep pace with the technology. What they decide will shape how justice is done for a generation.