LAWYERS who use fake AI-generated case citations can be struck off the roll, meaning they no longer have the right to practise.
“No shortcuts, no excuses”, the Law Society of South Africa (LSSA) warned yesterday.
It comes amid a growing number of instances where attorneys have been found guilty of citing case law that does not exist.
Speaking on behalf of the LSSA, Azhar Aziz-Ismail, the chairperson of the Johannesburg Attorneys Association, said such matters were referred to the Legal Practice Council (LPC) for disciplinary hearings.
“Submitting court documents that contain AI-generated, fictitious case citations constitutes serious professional misconduct, regardless of whether this occurs knowingly or unknowingly,” he said.
The penalties can be severe, ranging from fines and suspension to being struck off the roll, depending on the circumstances and the gravity of the misconduct.
“Beyond formal sanctions, attorneys also face significant reputational damage, which can have lasting consequences for their careers and professional standing,” he said.
Recent disciplinary actions in South Africa, he said, underscore the profession’s “zero-tolerance” approach to misleading the court.
There are currently no formal policies or regulations specifically governing the use of generative AI by licensed lawyers, but the LSSA is setting up an AI Committee. “This will develop comprehensive guidelines for lawyers, in-house legal teams, regulators, judges, tribunals, and other stakeholders on the responsible, appropriate, and ethical use of AI in line with the required standards of professional conduct,” said Aziz-Ismail.
It was incumbent on every lawyer, he stressed, to verify all information, including that generated by AI, before using it in any official legal capacity. He said this was clear in both South African and international case law. The expectation, he said, was clear: “lawyers must not rely solely on AI outputs without independent verification.”
He also pointed out that attorneys are responsible for supervising staff work and ensuring research accuracy, regardless of its source.
“In practice, this means that AI should be treated as a tool to assist with legal research and productivity, not as an unquestioned authority, and every output must be rigorously checked to maintain the integrity of the justice system,” said Aziz-Ismail.
He told the Independent on Saturday that the LSSA has so far not received any complaints related to the use of AI by legal practitioners. Should one arise, it would be referred to the LPC, the regulator, for investigation and appropriate action.
He said the LSSA has been proactive in providing guidance and education on AI in legal practice and its appropriate use by lawyers as well as emerging best practices.
Later this year it will also provide official training on AI, covering responsible AI use and prompt engineering.
“Currently, the LSSA does not maintain a list of recommended or approved AI tools, nor has it established formal vetting criteria for such tools. However, the soon-to-be-established AI Committee will consider the viability of developing such recommendations or standards as part of its mandate to support and guide the profession,” said Aziz-Ismail.
He stressed that the stakes are high, not just for practitioners, but for public trust in the justice system. “Additionally, there are broader risks that must be addressed when using AI, including intellectual property, data protection, confidentiality and legal professional privilege amongst others, all of which require careful consideration to ensure responsible and ethical use of these technologies in legal practice.”
Lawyer and UKZN lecturer Professor Donrich Thaldar said that in the most recent judgment relating to AI misuse, which also involved a senior counsel, the court stressed that where technology is used, the lawyer always remains responsible for the final content. “The acceptance is that people will use new technologies such as AI. I would personally go further and I actively encourage my students to use AI,” he said.
“In actual practice, I think many attorneys’ firms will expect young lawyers, the young article clerks, to be able to use AI not only in a responsible way, but also in an effective way.”
Thaldar said he had tested AI with several difficult cases and it performed extremely well, in some cases better than most humans. However, he would not trust it blindly. “AI is just a sounding board,” he said. “You need to be a master of using it; you must be able to use it effectively, but also responsibly.” He stressed that everything generated by AI should be tested.
“You cannot just accept it. Copy-paste that name of the case, X versus Y, immediately, go to Google and search for that case. And if you don't find it, you immediately know it was a hallucination. If you find it, then you need to read it. You need to open that case and then go through to find the exact sentence or paragraph where that statement was made, and then make sure that in the context, the AI actually interprets it correctly.”
However, AI is more than a trend; it’s here to stay. Results of a poll by online magazine Innovating With AI (IWAI) show that 83% of respondents prefer AI searches over ‘traditional’ Googling.
The reason: it’s more efficient for getting answers.
Professor Surendra (Colin) Thakur, an expert in 4IR and Digitisation at Unisa, says while it’s attractive for legal practitioners to “quickly” research case law using AI tools, this efficiency comes at a cost. He says that Generative Pre-trained Transformers (GPTs) like ChatGPT may produce false responses known as hallucinations.
“A Large Language Model (LLM) like a GPT is goal-directed,” says Thakur. He stresses that it seeks to ‘answer’ a query. “All the words in the query response are a statistical probability… It also adds creativity and other dimensions to the response. It does not deal in fact.” He stresses that it is crucial to cross-reference every generated statute, case law, or legal principle. “It may well be that you are fact checking another judgment which itself has a hallucination!” His advice: “It is your name and your reputation. The very least you can do is fact check.”
Professor Manoj Maharaj, an expert in Information Systems & Technology at UKZN, says the user, not AI, is to blame for false information. “AI just generates language based on your input… It’s not deliberate. It’s not human.” Maharaj stresses that while it should make you more productive, using it to generate case law is unethical. “It’s not designed to do that, and if you use something that’s not fit for purpose, then you’re going to face the consequences.”
He says ethics training “is far more important now than it has ever been.” Maharaj says that despite the rapid advancement of AI, regulation is lagging. He warns that AI cannot replace professionals such as lawyers, doctors or journalists, professions where ethics is crucial. “Don’t let AI usurp your intellect. Let it enhance it…”
The LPC had not responded to questions by the time of going to press.