AI-Authored Bar Exam is a Wake-Up Call for Legal Profession

The legal profession has received a stark reminder of the need to adopt responsible AI strategies, following the recent disclosure that artificial intelligence was used to draft some of the questions on the California bar exam.

The admission, which came from the State Bar of California, followed earlier complaints about question quality and technical glitches. The revelation has drawn disbelief and concern from many in the legal community, along with calls for the urgent improvement of understanding and responsible implementation of AI tools in regulated industries.

As reported by the Los Angeles Times, the State Bar disclosed that 23 of the 171 scored multiple-choice questions in the February exam were developed by ACS Ventures, the State Bar’s psychometrician, “with the assistance of AI.”

A further 48 questions were repurposed from an older exam intended for first-year law students. This combination of AI-generated content and recycled material has understandably sparked debate about the rigour, relevance, and fundamental fairness of an examination designed to assess the readiness of aspiring attorneys.

“It’s a staggering admission,” Katie Moran, an associate professor at the University of San Francisco School of Law who specialises in bar exam preparation, told the LA Times.

“The State Bar has admitted they employed a company to have a non-lawyer use AI to draft questions that were given on the actual bar exam,” she said. “They then paid that same company to assess and ultimately approve the questions on the exam, including the questions the company authored.”

The State Bar of California has said that it will ask the California Supreme Court to adjust test scores for those who took its February bar exam.

Responsible AI in Legal Education and Assessment

The irony in this situation is considerable. An industry built on the meticulous interpretation of human-written law now finds that the very instrument designed to evaluate future legal professionals was, in part, crafted by the AI that some fear could eventually replace human legal expertise.

This raises fundamental questions about the current state of legal education and assessment. When the assessors themselves turn to AI for help in defining entry requirements, the implications for the profession's future deserve serious scrutiny.

Generative AI in Regulated Industries

The incident highlights the real-world implications of using large language models (LLMs) and other generative AI methods within the assessment processes of regulated industries.

While LLMs demonstrate impressive capabilities in generating text, their aptitude for creating psychometrically sound examination questions remains a subject of ongoing inquiry.

Beyond the technical considerations, the ethical implications of using AI in such crucial assessments cannot be overstated:

  • Training data for LLMs can inadvertently encode existing biases, which could then be reflected in the generated questions, potentially disadvantaging certain groups of examinees. The State Bar’s disclosure necessitates a transparent account of the bias detection and mitigation strategies that were implemented during the question development process; one common psychometric screen of this kind is sketched after this list. Such transparency is crucial for maintaining the fairness and equity of the licensing process.
  • Furthermore, the reported technical glitches during the February exam, while not explicitly linked to the AI-generated content, highlight the systemic complexities of integrating novel technologies into established frameworks.
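
To make the bias-screening point concrete, below is a minimal sketch of one widely used psychometric check, the Mantel-Haenszel differential item functioning (DIF) statistic, which compares how two groups of examinees with similar overall scores perform on a single item. Everything here (the function name, group labels, and toy data) is a hypothetical illustration, not the State Bar’s or ACS Ventures’ actual process; operational screening would use real response data and established psychometric tooling.

```python
# Minimal, illustrative sketch of a Mantel-Haenszel DIF screen.
# All data and group labels are hypothetical.

from collections import defaultdict

def mantel_haenszel_alpha(responses):
    """Estimate the common odds ratio for one item across score strata.

    `responses` is a list of (total_score, group, correct) tuples, where
    group is "reference" or "focal" and correct is 0 or 1. Values near
    1.0 suggest the item behaves similarly for both groups at the same
    ability level; large deviations flag the item for expert review.
    """
    strata = defaultdict(lambda: {"ref_right": 0, "ref_wrong": 0,
                                  "foc_right": 0, "foc_wrong": 0})
    for score, group, correct in responses:
        cell = ("ref_" if group == "reference" else "foc_") + \
               ("right" if correct else "wrong")
        strata[score][cell] += 1

    num = den = 0.0
    for s in strata.values():
        n = sum(s.values())
        if n == 0:
            continue
        num += s["ref_right"] * s["foc_wrong"] / n
        den += s["ref_wrong"] * s["foc_right"] / n
    return num / den if den else float("nan")

if __name__ == "__main__":
    # Toy responses, stratified by total score (10 and 12).
    data = [
        (10, "reference", 1), (10, "reference", 0),
        (10, "focal", 1), (10, "focal", 0),
        (12, "reference", 1), (12, "reference", 1),
        (12, "focal", 1), (12, "focal", 0),
    ]
    alpha = mantel_haenszel_alpha(data)
    print(f"MH common odds ratio: {alpha:.2f}")  # near 1.0 suggests no DIF
```

An item whose common odds ratio departs substantially from 1.0 is not automatically discarded; it is flagged for review by subject-matter experts, who judge whether the difference reflects genuine bias or a legitimate distinction in ability.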

These issues underscore the critical need for enhanced “responsible AI education” across various stakeholder groups. Exam and assessment developers require comprehensive training in the capabilities and limitations of AI, including best practices for output validation.
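
As one illustration of what output validation could involve, the following is a hedged sketch of an automated structural pre-screen for AI-drafted multiple-choice items. The field names, thresholds, and sign-off rule are assumptions for the sake of example; passing these checks would only qualify a draft for expert human review, never for direct use on an exam.

```python
# Hypothetical structural pre-screen for AI-drafted multiple-choice items.
# Field names and thresholds are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class DraftItem:
    stem: str                           # the question text
    options: list[str]                  # answer choices
    key: int                            # index of the keyed (correct) option
    reviewed_by_attorney: bool = False  # human sign-off flag

def validate(item: DraftItem) -> list[str]:
    """Return a list of problems; an empty list means the draft may
    proceed to expert review, never directly to an exam."""
    problems = []
    if len(item.options) != 4:
        problems.append("expected exactly 4 answer options")
    if len(set(item.options)) != len(item.options):
        problems.append("duplicate answer options")
    if not 0 <= item.key < len(item.options):
        problems.append("answer key does not reference an option")
    if len(item.stem.split()) < 15:
        problems.append("stem too short to test legal reasoning")
    if not item.reviewed_by_attorney:
        problems.append("missing sign-off from a licensed reviewer")
    return problems

if __name__ == "__main__":
    draft = DraftItem(
        stem="A buyer orally agrees to purchase land; is the agreement enforceable?",
        options=["Yes, oral land contracts are enforceable",
                 "No, the statute of frauds requires a writing",
                 "Yes, but only in equity",
                 "No, unless part performance applies"],
        key=1,
    )
    for issue in validate(draft):
        print("REJECTED:", issue)
```

The design choice worth noting is the final check: in this sketch, no AI-drafted item leaves the pipeline without a named human reviewer, which is precisely the accountability gap highlighted by the California episode.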

Legal educators and bar administrators must cultivate a deeper understanding of the technical and ethical implications of AI-driven assessment processes to ensure that such processes are applied appropriately and equitably.

Similarly, policymakers and regulatory bodies across all regulated industries (banking, healthcare, and so on) need to develop robust frameworks that govern the use of AI in professional licensing and emphasise accountability and transparency. While AI offers opportunities for innovation and efficiency, its application in high-stakes areas such as professional licensing demands careful consideration, robust oversight, and a commitment to responsible implementation.

Any field considering AI integration in critical assessments must heed this wake-up call to ensure that technological advancements serve to enhance, rather than compromise, the integrity and fairness of its foundational processes.


This content was generated with the assistance of AI tools. However, it has undergone thorough human review, editing, and approval to ensure its accuracy, coherence, and quality. While AI technology played a role in its creation, the final version reflects the expertise and judgment of our human editors.
