The American Bar Association’s Standing Committee on Ethics and Professional Responsibility on July 29 released Formal Opinion 512 focused on generative artificial intelligence tools.
Law Week has been tracking AI's impact on local legal practice. From disciplinary actions by the state's presiding disciplinary judge to law firm use cases for the newer technology, many attorneys Law Week has spoken with are either wary of AI's implications for the law or have already embraced the technology for regular use at local firms.
Opinion 512 delves into AI use in legal practice. Specifically, the opinion addresses attorney duties, cybersecurity concerns, client communication, supervision requirements and reasonable legal fees.
Competence
The ethics committee pointed to Model Rule 1.1, which notes lawyers are obligated to provide competent representation to clients. The duty binds attorneys to understand the benefits and risks associated with technologies used to deliver legal services. But the committee notes that lawyers don’t need to be experts in AI.
The ABA committee explained attorneys need a reasonable understanding of the capabilities and limitations of the specific AI tool they might use in their practice. But the committee acknowledged this can't be a static undertaking; lawyers will need to stay abreast of developments in the technology on an ongoing basis.
Opinion 512 lists some risks AI use poses to attorneys' duty of competence. First, unreliable, incomplete or discriminatory output could result in inaccurate or harmful legal advice to clients or misleading representations to courts and other parties.
The committee advises attorneys to always independently verify all information provided by an AI system, in line with the requirements in Rule 1.1. But it notes the level of independent verification may depend on the system and the specific task the attorney asked of the AI. The committee also suggests starting with smaller tests and subtasks to verify accuracy in stages or to allow for review of smaller portions at a time.
Opinion 512 notes the tech may become more advanced over time and may eventually be a mandatory or more widely used tool in legal practice.
Confidentiality
The ethics committee cites Rules 1.6, 1.9(c) and 1.18(b) in connection with attorneys' confidentiality obligations. It advises attorneys, before using the technology, to evaluate the risk that information given to an AI tool will be disclosed to or accessed by others in some way.
AI tools differ in their security protocols and constraints. Because a self-learning tool is an element of the representation the attorney, or their firm, can't fully control, it may disclose client information improperly: information fed to a self-learning AI is used to train the system over time. The committee notes improper disclosures to outside parties may occur if information one attorney gave the AI is later revealed in response to another lawyer's prompts on a separate matter. Attorneys who receive that information may share the output with other clients, file it with courts or otherwise disclose it.
The committee explains client consent to use a self-learning AI tool must be fully informed, which requires attorneys to explain the full extent of the disclosure risks. Merely adding boilerplate provisions to engagement letters authorizing the lawyer to use the tool isn't sufficient to fulfill this requirement, the committee asserted.
Because of the uncertainty around self-learning AI tools, the committee acknowledged evaluating risks and obtaining fully informed client consent will be challenging. It advises lawyers who want to use this kind of technology to read and understand the terms of service, privacy policy and any other related policies of the AI tool, so they can learn who has access to the information users feed into the system. Attorneys may also benefit from consulting internal or external IT professionals before introducing AI into their practice.
Communication
Model Rule 1.4 requires attorneys to communicate with clients and builds on attorneys' obligations as fiduciaries. Specifically, the rule requires lawyers to explain matters fully so clients can make informed decisions about the representation.
Depending on the circumstances, the ethics committee noted, client disclosure may be unnecessary. But it explained lawyers need to disclose their AI practices if a client asks how they conduct their work or whether AI was used in doing so.
Rule 1.4 may require lawyers to discuss AI use even if unprompted by the client. The committee posed the example of an attorney who wants to input information relating to the representation into an AI tool. Lawyers must also consult with clients about using AI if it's relevant to the basis or reasonableness of the attorney's fee. The committee went on to note that consultation may be needed when the AI's output could influence a significant decision about the representation.
The ethics committee also noted it's impossible to list every situation in which a lawyer needs to inform clients about AI use. It advised attorneys to consider the technology's importance to a particular task, the significance of that task to the overall representation, how the AI will process the client's information and the extent to which knowledge of the lawyer's use of the tool would affect the client's evaluation of or confidence in the attorney's work.
Miscellaneous Considerations
The committee, citing Rules 3.1, 3.3 and 8.4(c), explained lawyers using AI in litigation have ethical responsibilities to clients and the courts. Specifically, lawyers can’t bring frivolous claims, knowingly make false statements of law or fact to a tribunal or engage in dishonesty, fraud, deceit or misrepresentation.
Model Rules 5.1 and 5.3 require lawyers with managerial responsibilities to create effective measures ensuring lawyers at the firm conform to the rules of professional conduct. To that end, managerial lawyers need to establish clear policies on the firm's permissible use of AI, the committee explained. These supervisory attorneys also need to ensure lawyers and staff comply with their professional obligations when using AI and are trained on the ethical and practical uses of the tools relevant to their work, as well as the risks of the technology. Managerial lawyers must independently investigate, verify and understand the security protocols, practices and risks of any AI tools the firm uses.
Model Rule 1.5's fee provisions extend to charging clients for AI tools, including when the attorney and client agree on a flat or contingent fee. The rule's factors apply in determining whether charging clients for the use of such technology is reasonable. If AI allows an attorney to complete tasks more quickly, it may be unreasonable to charge the same flat fee with the tool as without it. The committee also suggests attorneys should treat AI tools as overhead costs unless a contrary disclosure to the client was made in advance and is justified under the model rules on fees, as discussed in Formal Opinion 93-379.