Artificial intelligence technology continues to make waves across the world, including in the legal field. And now it has passed a bar exam.
According to a recent academic paper, there are now many large language models, a category of AI that includes OpenAI’s generative pre-trained transformer, or GPT, models. Essentially, users can input questions and the AI can spit out some astounding answers.
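As a rough illustration of how that question-and-answer exchange works in practice, the short Python sketch below sends a single legal question to GPT-4 through OpenAI’s publicly available API. The sample question and the setup are purely illustrative and are not drawn from the researchers’ experiment.

    # Minimal sketch of asking GPT-4 a legal question via OpenAI's Python library.
    # Illustrative only; the question below is hypothetical.
    from openai import OpenAI

    client = OpenAI()  # reads the OPENAI_API_KEY environment variable

    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {
                "role": "user",
                "content": "A landlord fails to return a tenant's security deposit "
                           "within the statutory period. What remedies might the "
                           "tenant have?",
            }
        ],
    )

    print(response.choices[0].message.content)  # the model's written answer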
GPT-4, OpenAI’s latest effort in scaling up deep learning, recently passed the Uniform Bar Exam with a score of 297, which is well above passing in every state. Colorado is a Uniform Bar Exam jurisdiction with a passing score of 270; the state lowered that threshold from 276 to 270 earlier this year. The overall pass rate in Colorado for the July 2022 exam was 70%.
Previously, GPT-3.5 took only the multiple-choice portion and scored 50%, though it did pass both the evidence and torts sections. In the latest test, GPT-4 took the multiple-choice section along with the essays and the performance test.
According to the most recent paper, GPT-4 scored drastically higher than students in some of the multiple-choice sections, including contracts and evidence. It scored lowest in civil procedure, which is also the category students struggle with the most.
Daniel Katz, a law professor at Illinois Tech’s Chicago-Kent College of Law, was a co-author of the recent paper about the experiment. Katz told Law Week researchers don’t know for sure why the model scored lower in civil procedure, acknowledging the subject could simply be more difficult to understand.
For the essay portion, researchers compared GPT-4 with earlier models, and the latest version scored much higher, hitting stronger notes on what the correct answer should be. Katz added that GPT-4 isn’t perfect in some situations, but the errors it makes are ones students commonly make as well.
“The errors are the ones I would expect,” Katz said.
In the Multistate Performance Test portion of the exam, test-takers are given a packet of information to work through to help them answer complex legal issues. According to the paper, the MPT requires the test-taker to imagine they are in a fictional jurisdiction whose law may differ from the real law. Researchers were surprised GPT-4 largely avoided the trap of citing legal principles that would appear to be on the right path but don’t apply in that jurisdiction.
Katz said when considering how this could impact the future of the bar exam, it all boils down to the conditions under which a student is allowed to take the test.
Colorado Attorney Regulation Counsel Jessica Yates told Law Week there are ample protections in place that prevent applicants from accessing external resources while taking the bar exam.
“For example, applicants cannot take cell phones into the room where they take the bar exam,” Yates wrote. “The computer-based portion of the exam uses an application that blocks the user from accessing other applications or software on the computer while the user is accessing the exam.”
Katz tells his students there’s a big opportunity knocking with this technology, adding that some of his friends got their first jobs at firms because they knew something about the internet. He said students who can talk intelligently about GPT could have a leg up on other job seekers.
“Generally, people don’t learn that stuff,” Katz said. He said the technology could be used not only to make someone’s practice more efficient, but also to address legal issues that crop up for clients.
Recently, Casetext launched its own program, CoCounsel, an AI legal assistant powered by GPT-4.
“Our AI legal assistant is the first of its kind,” said Jake Heller, co-founder and CEO of Casetext in a press release from March 1. “It creates a momentous opportunity for attorneys to delegate tasks like legal research, document review, deposition preparation, and contract analysis to an AI, freeing them to focus on the most impactful aspects of their practice.”
GPT-4 has also taken other standardized tests, scoring a 163 on the LSAT, which is in the 88th percentile. The model also scored a 710 on the SAT evidence-based reading and writing section and a 700 on the math section, putting it in the 93rd and 89th percentiles, respectively.