Best Practices for Ethical AI Use in Legal Drafting

Can I use generative artificial intelligence in my legal practice to help draft documents? Can I use tools like ChatGPT to help write letters, court filings, legal memos, emails, or other documents? Are there ethical rules I should think about when using generative AI to draft legal work? Also, will there come a time when lawyers are expected to use generative AI as part of their practice?

Yes, lawyers can generally use tools like ChatGPT in their practice, but they must do so carefully and follow their professional ethical duties.

Lawyers have actually been using forms of artificial intelligence to help draft documents, emails, and letters for quite a while. For example, when someone writes in Google Docs, the program often suggests the rest of a word or even the next word in a sentence. ChatGPT works in a similar way, but on a much larger scale and with access to much more information to learn from.

However, there are some important differences between the simple predictive text features we see in programs like Word or Outlook and the newer generative AI tools, such as ChatGPT, that have become popular in recent years.

It seems entirely likely that artificial intelligence will become a regular part of how lawyers practice law. As more advanced tools become available, including ChatGPT and other forms of “generative AI” (programs that learn from large amounts of information and create text, images, and other content), many lawyers are asking whether they can use these tools in their practice and whether they will eventually be expected to.

Lawyers spend a large part of their time drafting documents. This can include letters, court filings, legal memoranda, contracts, deeds, estate planning documents, employment handbooks, and many other materials. This article explains some of the ethical issues lawyers should think about before using generative AI to help with these everyday tasks, especially tools that go beyond the simple predictive text features found in programs like Word or email.

Right now, advanced generative AI tools for businesses are still very new. In the next few years, new software will almost certainly be created that allows law firms to run generative AI systems inside their own offices or secure networks. This would help protect client information while still allowing lawyers to benefit from AI programs that learn and improve over time.

For now, most lawyers only have access to “outside AI.” These are programs such as ChatGPT, DALL-E, Google Bard, Perplexity, ChatSonic, and many other tools that are still being developed. This article looks at the ethical issues lawyers should think about when using these outside generative AI programs to help draft documents in their daily legal work.

Most states follow versions of the ABA Model Rules of Professional Conduct, which set ethical standards for lawyers. While the exact wording may vary slightly from state to state, the core duties discussed below apply to lawyers across the United States. In 2024, the American Bar Association published Formal Opinion 512, which goes over how lawyers can perform their duties while using generative AI ethically.

Civille helps law firms drive business online by providing modern websites, data-driven marketing, and tools that help them turn casual visitors into qualified leads. Civille also helps lawyers stay up to date on fundamental changes that influence the legal profession through its curated platform and instructional materials.

Rule 1.1 (Competence)

This rule requires that a lawyer provide competent representation to a client. That includes paying close attention to the details needed to complete the work without causing avoidable harm to the client’s interests.

If you plan to use generative AI in your practice, you should try to understand how it works. You should also understand its limits. This can help you avoid mistakes that could harm your client or affect their legal matter.

A lawyer must also check any information they get from generative AI to make sure it is correct. For example, a New York lawyer used ChatGPT to find case law, did not confirm that the cases were real, and included them in court filings. The cases turned out to be made up, a problem often called a “hallucination” in generative AI. See Mata v. Avianca, No. 22-CV-1461 (PKC), 2023 WL 4114965 (S.D.N.Y. June 22, 2023).

Lawyers have a duty to represent their clients to the best of their ability and be honest with the court (which is addressed below). This means they have to make sure that any information generated by ChatGPT is correct before they use it to assist them in writing documents. Generative AI can occasionally make up facts (hallucinations) or omit important details. Because of this, lawyers need to check the facts and make sure they are giving their clients reliable, trustworthy representation.

Rule 1.4 (Client Communications)

This rule requires a lawyer to reasonably consult with the client about the means by which the client’s objectives will be accomplished. If a lawyer wants to use AI in a way that could put the client’s confidential information or the representation at risk, the lawyer may need to tell the client and get their permission.

ABA Resolution 112 (2019) highlights the importance of accountability and oversight when artificial intelligence is used in the legal system. In some cases, lawyers may also need to tell clients about AI, especially if it could put confidentiality or the case at risk. In other words, the client should know that AI might be used and exactly what that means.

ABA resolutions are not binding law, but some courts treat them as persuasive guidance. For example, in Stock v. Schnader Harrison Segal & Lewis LLP, 35 N.Y.S.3d 31, 44 (2016), the court partly relied on an ABA resolution when deciding the scope of the attorney-client privilege.

Resolution 112 also says that in some situations, a lawyer may need to tell the client if they choose not to use AI, especially if using AI could benefit the client.

Because many clients are now using AI on their own, it is a good idea for lawyers to talk with their clients about whether they plan to use AI or not. Lawyers should also explain the possible risks, such as a client accidentally sharing confidential or privileged information with an AI program.

Rule 1.6 (Confidentiality)

This rule is especially important when using outside generative AI tools. If a lawyer shares confidential client information with an AI program without the client’s informed consent, it could violate the lawyer’s duty of confidentiality.

Under Rule 1.6 of the Rules of Professional Conduct, a lawyer generally cannot reveal information related to representing a client unless the client gives informed consent, the disclosure is reasonably necessary to carry out the representation, or the rule allows it in limited situations.

A lawyer should not assume that a client has automatically allowed them to share confidential information with a generative AI program. See ABA Resolution 112 (2019). Instead, the lawyer should first get the client’s informed consent.

“Informed consent” means that the client agrees to the plan only after the lawyer clearly explains the important risks and any reasonable alternatives.

Similar to Rule 1.4, before sharing any client information with a generative AI program, a lawyer must explain the important risks of using the technology. The lawyer must also discuss any reasonable alternatives to using generative AI.

In some cases, a client may already understand the risks of generative AI. If so, the client may choose to allow the lawyer to use it to help create draft documents.

Using generative AI to help draft communications with opposing counsel may raise ethical concerns if confidential client information is shared with the AI system or if the lawyer fails to review the content carefully.

For similar reasons, lawyers should be cautious about using generative AI to draft court filings. Preparing pleadings often requires sharing details about the client and the case, which could mean disclosing confidential information to an outside AI program.

AI Hallucinations and Rule 3.3 (Candor to the Tribunal)

As mentioned earlier, some lawyers have already gotten into serious trouble for using generative AI tools like ChatGPT to help draft court filings. In those cases, the AI program produced information that looked real but turned out to be false. Courts and ethics authorities have increasingly warned lawyers that they must verify AI-generated citations before filing documents with the court.

Rule 3.3 says that a lawyer cannot knowingly make a false statement of fact or law to a court. It also says that if a lawyer realizes they previously gave the court false information about an important fact or law, the lawyer must correct it.

It is now well known that ChatGPT and other generative AI programs sometimes make mistakes called “hallucinations.” This means the program may give answers that sound correct but are actually false. Because of this, lawyers should not rely only on generative AI for legal research. If they do, they could end up citing case law that does not exist.

Submitting AI-generated citations without verifying that the cases actually exist can lead to serious sanctions and may violate the rules governing candor to the court, competence, and professional misconduct.

If a lawyer chooses to use AI for early legal research, they must carefully check every citation to make sure it is real and accurate. They must also make sure they do not share any confidential client information with the AI program.

As a general reminder, even when AI is not involved, lawyers still have a duty to make sure that any cases or legal citations they give to the court are real and correct.

Rules 5.1–5.3 (Supervision and Responsibilities for Non-Lawyer Assistance)

These rules explain a lawyer’s duty to supervise other lawyers and staff who help provide legal services.

For example, Rule 5.1 says that lawyers who manage a law firm must take reasonable steps to make sure the lawyers in the firm follow the Rules of Professional Conduct. Rule 5.3 focuses on supervising non-lawyer assistants, such as staff or others who help provide legal services. Rule 5.2 explains that lawyers who work under supervision must still follow the rules themselves, even when they are acting under a supervising lawyer’s direction, except in limited situations.

In 2012, the ABA updated Model Rule 5.3 to make it clear that the rule applies to help provided by non-lawyers, including services from outside vendors or technology platforms.

Ethics authorities have explained that Rule 5.3 can apply when lawyers rely on outside vendors or technology tools, which means lawyers must supervise how those tools are used. In practice, lawyers who use AI tools must make sure the technology does not reveal confidential client information and must check that any AI-generated content is accurate and complete before using it in legal work.

When Can Lawyers Use Generative AI Tools Like ChatGPT to Draft Legal Documents?

There are some situations where lawyers may be able to use generative AI in an ethical way. This is more likely when the lawyer knows that using the technology will not break any ethical rules and when the client has given informed consent.

Before using AI to draft documents, lawyers should review the program’s terms and conditions. This helps them understand how the tool works and avoid possible copyright problems or other risks that could harm the client.

Examples of situations where AI may be used appropriately include:

  • Drafting marketing materials, blog posts, or newsletters for clients.
  • Creating basic templates that lawyers can later customize.
  • Preparing first drafts of general documents, such as litigation hold letters, as long as they do not include client or case information, and the lawyer reviews them carefully.
  • Creating a first draft of a contract that does not include client details and is thoroughly reviewed by the lawyer.
  • Drafting general discovery request templates that can later be tailored to a specific case.
  • Assisting with early drafts of proposed legislation.

Once AI tools can be used securely inside law firms, where client information can be protected, they will likely become a more common part of everyday legal work. Even then, lawyers will still be responsible for reviewing the work and making sure any AI-generated content is accurate, meets professional standards, and protects the client’s interests.

Other Ethical Considerations  

To meet their duties of competence and diligence, lawyers should also be careful when reviewing documents created by other people.

For example, a family law attorney may receive documents from someone who does not have a lawyer. These documents might include photos or videos that appear damaging to the client’s case. Lawyers should not automatically assume that these materials are real. It is possible that the photos or videos were created or altered using generative AI.

Many people have heard the terms “hallucinations” or “deepfakes.” These refer to information, documents, photos, videos, or recordings created by AI that show events that never actually happened.

Lawyers should keep these possibilities in mind when they review evidence or documents provided by other parties.

As AI becomes more advanced, it may be used to analyze patterns or inconsistencies in data, although the use of AI to detect deception remains an evolving area. AI tools are already used in e-discovery to help find missing documents or identify patterns in large amounts of data. As this technology continues to develop, lawyers should be ready to deal with AI-related issues when reviewing discovery and evidence.

Will Lawyers Eventually Be Expected to Use AI?

Some experts believe that this could happen in the future. For example, Rule 1.5 says that lawyers must charge reasonable fees. Some commentators have suggested that if technology significantly reduces the cost of legal services, lawyers may need to consider whether using it could help provide services more efficiently.

For now, the risks of using AI may outweigh the benefits in some cases. But as AI tools become safer and better suited to legal work, they could help lawyers do their jobs faster and save their clients money.

Many bar associations and professional groups are studying how AI is changing the way lawyers work. They are also developing guidance to help lawyers decide how best to use AI in their practice.

As technology reshapes the legal field, law firms need to understand the benefits and risks of tools like generative AI. Civille helps lawyers strengthen their digital practices with modern law firm websites, client-focused marketing strategies, and tools that connect people with the right legal services. Visit Civille’s website or read the Civille blog to learn more.
