Artificial Intelligence in Arbitration

By Michael H. Diamant

This was originally published in the Cleveland Metropolitan Bar Journal.

Artificial intelligence, or AI, is not new; it has been with us for decades. It is simply a computer program composed of a set of algorithms, or mathematical statements, e.g., if X then Y. AI systems have long been used in everything from aircraft autopilot systems to microwave oven controls and GPS navigation systems such as Waze. Lexis, Westlaw, and Google searches run on AI algorithms. The big change came, following Moore’s Law, as the microprocessors (chips) at the heart of all computers became exponentially smaller and thereby exponentially faster, and consequently able to process vast amounts of information almost instantly. As a result, AI systems have been developed that can convert plain-language inputs into algorithms and compose answers that are not limited to simply citing or quoting references but are written in plain language. Further, open AI systems are not limited to the data sets that the developer inputs; rather, they can access the entire internet and “learn” from the additional data they uncover. The accuracy of an AI-generated response depends on the quality of the software and, most significantly, the quality and accuracy of the data to which it has access.
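
To make the “if X then Y” point concrete, consider the following minimal sketch of a rule-based system, written in Python. It is purely illustrative; the function name and altitude thresholds are hypothetical and are not drawn from any real autopilot:

    # A rule-based system is a series of conditional statements
    # ("if X then Y") applied to input data. The thresholds below are
    # hypothetical, chosen only to show the structure.
    def altitude_correction(current_ft, target_ft):
        if current_ft < target_ft - 100:    # if X (too low) ...
            return "climb"                  # ... then Y (climb)
        if current_ft > target_ft + 100:    # if X (too high) ...
            return "descend"                # ... then Y (descend)
        return "hold"                       # otherwise, hold altitude

    print(altitude_correction(34800, 35000))  # prints "climb"

Decades-old control systems work essentially this way: the rules are fixed in advance by a programmer rather than learned from data.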

AI does not think, notwithstanding the hype. It merely applies mathematical algorithms to data. AI learns by modifying its algorithms as it acquires new data. AI software can only rely on its algorithms and the data available to it. If the data contains errors or irrelevancies, then the output can contain errors, often referred to as hallucinations. Advocates and arbitrators are now facing both practical and ethical AI issues in their practices. The role of AI for an advocate in an arbitration is no different than in a court action. First, there is the duty to the client. Counsel should investigate thoroughly whether one must disclose the use of AI to the client, and/or obtain the client’s permission, before using it to draft documents and court submissions. One could argue that using AI for research is already the norm with Westlaw and/or Lexis and often Google or another search engine. AI software can add to a lawyer’s efficiency, for example, when organizing and/or analyzing a large volume of documents or a large amount of data. However, using AI to analyze documents raises the issue of whether it is the lawyer or the software exercising judgment.

A lawyer has a duty of competence in representing a client. (Ohio Rules of Professional Conduct, Rule 1.1) That duty implies that a lawyer using AI software must understand the software and be competent in its use. If it is used as an organizational tool, the lawyer must be operationally competent with the software. But what about querying an AI system as to a legal question, i.e., the application of law to facts, and asking it to write an actual legal argument? Whose competence is involved, the lawyer’s or the AI software’s? Ethically, it must be the lawyer exercising judgment and not simply relying on the AI software’s analysis.

One could argue that using AI is akin to having an associate or summer clerk draft a brief or pleading. It is the ultimate responsibility of the lawyer signing the pleading to read the cited cases and statutes to assure that they are properly cited and state what is asserted in the pleading. What is key is that the lawyer use independent judgment to assure that what the software has generated is accurate.

There have been several cases and numerous articles written about AI hallucinations producing a case name and/or citation that did not exist. A lawyer signing and submitting such a pleading risks facing ethical issues, as well as Rule 11 sanctions. The hallucinations resulted from the AI software not being a “closed” system and accessing too broad a database, e.g., the entire internet. The hallucinations would not have occurred had the database been limited to actual decided cases. This demonstrated the lawyer’s lack of understanding of how the AI system worked. Most importantly, the lawyer then relied on the citation without checking it. Some of us older lawyers will remember when, before automated research tools, some lawyers might start research using a legal encyclopedia[1] or a West Digest. We were always instructed that one had to read the referenced cases before citing them in a brief, as a footnoted case sometimes did not specifically support the statement for which it was cited.

Another less obvious, but just as important, issue is the lawyer’s ethical obligation of confidentiality to protect a client’s privileged information. AI software is constantly training itself by analyzing the input it receives in order to predict the next word it will generate in response to queries. When one uses an “open AI” program, the software takes all inputs from all users and uses that data to further train itself. It gives each word in a string of words a numerical value. As it sees more data, i.e., sentences, it refines its predictions and those numerical values. The problem is that open AI software is not resident on the user’s computer, but in the cloud, i.e., on an untold number of servers around the world. Not only does an open AI system access the internet to seek information to answer a query, but it also takes in the data received from users to further train itself and makes that data available to train that software on all the other servers where it is resident in the cloud. Consequently, law firms should only use closed AI systems when dealing with client data. Closed AI systems are resident only on in-house or controlled servers that do not share data with other servers, let alone the entire internet. Of course, there are situations where access to client data must be limited in-house as well. (Ohio Rules of Professional Conduct, Rule 1.6) Lawyers should never expose client data to open AI systems without specific client authorization.
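
As a purely illustrative sketch of the next-word prediction described above, the toy Python model below counts which word follows which, and “refines its predictions and the numerical values” as it sees more sentences. Real generative AI systems use neural networks with billions of learned parameters, but the underlying idea of assigning values to word sequences and updating them with new data is the same:

    # Toy next-word predictor: the "numerical values" here are simple
    # counts of how often one word follows another in the training text.
    from collections import Counter, defaultdict

    counts = defaultdict(Counter)  # counts[word][next_word] -> frequency

    def train(sentence):
        """Update the word-pair counts (the model's values) from one sentence."""
        words = sentence.lower().split()
        for current, nxt in zip(words, words[1:]):
            counts[current][nxt] += 1

    def predict_next(word):
        """Return the most frequently observed next word, if any."""
        following = counts[word.lower()]
        return following.most_common(1)[0][0] if following else None

    # Every sentence the system sees refines its predictions.
    train("the arbitrator shall issue an award")
    train("the arbitrator shall decide the dispute")
    print(predict_next("arbitrator"))  # prints "shall"

The confidentiality concern follows directly from this design: in an open system, whatever users type in becomes training data available wherever the software runs.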

An arbitrator’s and a lawyer’s potential uses of, and responsibilities as to, AI overlap, but are somewhat different. First, an arbitrator has a duty to protect the confidentiality of the process in its entirety. Consequently, an arbitrator should not input any information about an arbitration into an open AI system. Second, and of equal importance, is an arbitrator’s obligation to use independent judgment to decide a case and not rely on AI. An arbitrator may use a closed AI system to generate scheduling orders, to organize timelines and exhibits, and even to draft generic parts of Orders or the Award, but all determinations and findings in an Order or Award must be those of the arbitrator and not generated by AI.[2] Ultimately, it is the arbitrator who has been engaged by the parties to decide the dispute and who is obligated to use his or her judgment, not that generated by a computer program.[3]

One issue that applies generally is how to authenticate evidence. Is a photo or video an actual image of reality, or was it generated by AI? Is a document submitted as evidence an actual document, or was it created by an AI program? The issue of authenticating evidence is not new, but with AI it has become potentially much more difficult; a full treatment is beyond the scope of this article.

Numerous organizations have recently published guidelines for the use of AI in arbitration. The first, entitled Guidelines on the Use of Artificial Intelligence in Arbitration, was issued in 2024 by the Silicon Valley Arbitration & Mediation Center (SVAMC),[4] www.svamc.org, a non-profit organization that promotes the use of ADR in resolving technology disputes, vets technology neutrals, and annually publishes its Tech List of the World’s Leading Technology Neutrals.[5] This year, the Chartered Institute of Arbitrators, www.CIArb.org, an international organization based in London, England, that promotes arbitration and trains international arbitrators (with North American and New York branches), published its Guideline on the Use of AI in Arbitration.[6] The American Arbitration Association/International Centre for Dispute Resolution, www.adr.org, issued a document entitled AAA-ICDR Guidance on the Arbitrators’ Use of AI Tools – March 2025.[7] Other arbitral organizations have also addressed AI: the International Institute for Conflict Prevention and Resolution (CPR), www.cpradr.org, offers courses and certifications, and JAMS, www.jamsadr.com, has addressed AI in its rules.[8]

Lawyers and arbitrators need to be familiar with what AI is and is not, and be prepared to address the issues it raises when involved in arbitrations.[9]

Michael H. Diamant is retired as a partner and remains Of Counsel at Taft. He has litigated and arbitrated around the country, including before the U.S. Supreme Court, and has tried numerous jury and non-jury cases involving complex engineering, IP, and business disputes. He has been a CMBA member since 1971. He can be reached at (216) 706-3949 or mdiamant@taftlaw.com.

  1. O Jur, Corpus Juris Secundum, or Am Jur.
  2. Various arbitration administering organizations make available closed AI systems for their arbitrators’ use.
  3. American Arbitration Association, The Code of Ethics for Arbitrators in Commercial Disputes, Canons V & VI. https://adr.org/sites/default/files/document_repository/Commercial_Code_of_Ethics_for_Arbitrators_2010_10_14.pdf
  4. https://svamc.org/wp-content/uploads/SVAMC-AI-Guidelines-First-Edition.pdf
  5. https://svamc.org/2025-tech-list/
  6. https://www.ciarb.org/media/m5dl3pha/ciarb-guideline-on-the-use-of-ai-in-arbitration-2025-_final_march-2025.pdf
  7. https://go.adr.org/rs/294-SFS-516/images/2025_AAA-CDR%20Guidance%20on%20Arbitrators%20Use%20of%20AI%20Tools%20%282%29.pdf?version=0
  8. https://www.jamsadr.com/artificial-intelligence-disputes-clause-and-rules
  9. This article was written without the use of generative AI, but Google was used to obtain the URLs in the footnotes, and Word was used to check spelling and grammar.