A professor shows future lawyers how to put AI in its place

Judge Jim Hughey from the Tenth Judicial Circuit Court of Alabama in Jefferson County and AI “interpreter” Graycie Elliot, a UAB Pre-Law student, listen to ChatGPT’s testimony in the Pre-Law showcase. (Image courtesy Brandon Blankenship, J.D., and UAB Arts)

The students learned to punch the “stop generating” button as quickly as possible.

This spring, undergraduates in the UAB Pre-Law Program were taking part in one of the regular “showcase” events organized by program director Brandon Blankenship, J.D., teaching assistant professor in the J. Frank Barefield, Jr. Department of Criminal Justice.

In 2023, the U.S. Supreme Court ruled against the Andy Warhol Foundation in a dispute with photographer Lynn Goldsmith, finding that the late artist’s silk-screened works based on her photograph of singer Prince were not shielded by fair use. Could Blankenship’s students convince a jury that generative AI tools are doing the same thing when they base their images on copyrighted works in their training data? Or make the case for AI’s innocence? The key question, Blankenship said, is “if AI makes an image and bases its creation of that image on somebody’s known work, is that a copyright violation?”


Watch what AI says

Some of the students argued on behalf of a real local artist, who played the plaintiff, while others advocated for AI. A real local judge presided, and the audience at UAB’s Abroms-Engel Institute for the Visual Arts acted as the jury. AI, the defendant, was played by ChatGPT. A smartphone took the stand so that students could ask questions aloud and the ChatGPT app could answer. “ChatGPT did an excellent job of taking a position and defending its position,” Blankenship said. “But the longer the students engaged with it, the more likely it would end up undermining its own case by contradicting something it said earlier.”

Eventually, “we had what the judge ended up calling an ‘interpreter,’” Blankenship said. “The phone was on the stand, and a student was allowed to hit the button to cut it off. They learned that the first thing the AI said was the best you were going to get.”

After two hours of debate, the jury voted — in favor of AI. “Part of the artist’s argument was that she had suffered financial loss,” Blankenship said. “The jury sympathized with the artist, but they did not believe that the artist had proved that the value of her work was harmed. The jury said, ‘We think that what AI has done is wrong, but not illegal.’”

Brandon Blankenship, J.D.

Impressive, especially in limited doses, but dangerous when left to itself: that about sums up Blankenship’s own experience with AI. He is no enemy of the technology; in fact, he relies on it regularly to schedule meetings, take the opposite side of arguments as a devil’s advocate and help flesh out emails. “I’m a very terse email writer,” Blankenship said. “I’ll get a page-and-a-half email and answer, ‘Yes.’ So now I let AI help me with those email responses.”

But Blankenship also is a stickler about having students document their AI use in assignments. He discloses his own use as well, with a line he has added to his email signature. “I’m just trying to reinforce that you should disclose it,” Blankenship said.


AI as legal research tool

An exercise Blankenship added to his Bill of Rights class in the spring 2024 semester helped him get across an important point about AI as a research tool. “It is not hard to find a law, especially when it comes to rights,” Blankenship said. “But you don’t know if it’s good law. You don’t know if the Supreme Court has overturned an earlier case, or what the modern right has evolved into.”

Blankenship’s classroom approach is heavy on role play and team-based learning. In the Bill of Rights class, he assigned one group of students to dig into a case with traditional research methods: look up the issue, pull the case from the books and check Shepard’s Citations, a tool that gathers subsequent decisions that cite the case. “The challenge is, if you go read the subsequent decisions and determine that your case is good law, there is still a lot of critical thinking you have to do to determine if it applies to your set of facts,” Blankenship said.

Another group of students was assigned to type the facts about the case into an AI tool and ask for an opinion. “AI gives a wonderful answer,” Blankenship said.

Blankenship used that research as the basis for a class debate on the issue, encouraging the teams “to look for error in the other group and correct it.” The AI team’s argument sounded better, putting them ahead in the first round of debate. But as the debate moved into subsequent rounds, the AI students found that, while they could “articulate a good argument, they didn’t know how they’d arrived there,” leaving them open to the critical questions of the other side. “In the second, third and fourth rounds, the traditional team ended up pulling ahead and decisively beat the AI team,” Blankenship said.

Eventually, most students see that the best way forward is a hybrid approach, he noted: “They say, ‘Let’s go look at what good law is, and then AI is going to help us communicate it more effectively.’”

The point of the exercise, Blankenship said, “is that AI has a good place and is a great tool, but we can’t take the human out of the loop.” Using AI tools quickly becomes “a hard habit to break,” he added. “I try to get them to use the tool as a coach and an assistant, not as a substitute.”


Interdisciplinary research on AI to increase public trust in judicial decisions

Blankenship is using generative AI for a research project of his own, collaborating with Stacy Moak, Ph.D., professor in the UAB Department of Political Science and Public Administration; Baocheng Geng, Ph.D., and Qing Tian, Ph.D., in the UAB Department of Computer Science; and colleagues at the Vanderbilt AI Law Lab and other institutions. They are developing a tool to identify suspected bias in judicial decision-making by flagging instances where the charges against defendants do not match the evidence admitted by the court. “AI techniques have the potential to safeguard against human subjectivity and lack of transparency, thereby increasing public trust and constitutional protections if properly deployed,” the researchers write in their project summary.

The human will stay in the loop throughout, Blankenship explained. Where the AI verdict differs from the actual court order, a human reviewer “will report on the basis for the difference, especially commenting on suspected bias, such as where too much or too little weight was given to the particular admitted evidence,” the researchers say.
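
To make that workflow concrete, here is a minimal sketch, in Python, of the human-in-the-loop pattern the researchers describe: a model renders a provisional verdict from the admitted evidence, and any case where that verdict diverges from the court’s actual order is routed to a person. Every name, field and rule below is a hypothetical illustration, not the team’s actual system.

    # Hypothetical sketch of human-in-the-loop flagging; the model is stubbed out.
    from dataclasses import dataclass

    @dataclass
    class CaseRecord:
        case_id: str
        charge: str
        admitted_evidence: list[str]
        court_order: str  # the court's actual disposition, e.g. "guilty"

    def ai_verdict(record: CaseRecord) -> str:
        """Stand-in for a trained model that predicts a disposition
        from the charge and the admitted evidence."""
        # Toy rule, for illustration only.
        return "guilty" if len(record.admitted_evidence) >= 3 else "not guilty"

    def flag_for_review(records: list[CaseRecord]) -> list[CaseRecord]:
        """Return cases where the AI verdict differs from the court order.
        A human reviewer, not the model, examines each flagged case and
        reports on the basis for the difference."""
        return [r for r in records if ai_verdict(r) != r.court_order]

The design choice that matters is the one Blankenship emphasizes: the model never has the last word; it only decides which cases a person needs to look at.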

In the current state of generative AI technology, “you have to do a lot of work to determine whether these tools are reliable,” Blankenship said. “That’s where you see lawyers filing briefs in court with information that was hallucinated by AI. That’s not the tool’s fault. It’s the lawyer’s, student’s and scholar’s responsibility to make sure there is integrity in what they are presenting.”