Points to Consider When Developing an AI Use Policy
Whether you are in-house counsel, outside counsel, a solo practitioner, or working alongside any of these teams, you should be aware of the duty to use artificial intelligence (AI) responsibly, ethically, and in compliance with the laws of all jurisdictions in which you do business. The following points focus on core attorney obligations and practical considerations for developing your own AI use policy.
Key Attorney Obligations When Using Generative AI:
Competence
Attorneys remain responsible for the accuracy, reliability, and legal sufficiency of any work product generated with the assistance of generative AI (GAI). This includes:
- Developing a reasonable understanding of the capabilities, limitations, and risks posed by GAI tools.
- Understanding how tools handle input and training data.
- Exercising independent professional judgment and thoroughly reviewing all GAI outputs for accuracy and completeness.
- Recognizing that the attorney—and not the tool—owns any resulting errors or issues.
Confidentiality and Data Security
- Attorneys must assess whether the use of a particular GAI tool presents a risk of compromising client-confidential information, including evaluating the tool’s security practices, data retention policies, and risk of disclosure.
- Sensitive or confidential information should never be input into unvetted tools.
- Consider limiting use to enterprise or local AI platforms, or, at a minimum, to platforms with contractual confidentiality guarantees.
- Extra caution is required for trade secrets: entering them into any AI tool—even local or enterprise tools—could weigh against a determination that reasonable measures were taken to keep the information confidential.
- Data-isolation functionality (e.g., local device encryption and restricted-access storage) can help mitigate risk.
Supervisory and Management Responsibilities
- Attorneys in supervisory roles must establish clear internal policies and training programs for the ethical use of GAI by attorneys, non-attorney staff, and third-party vendors.
- Implement review and validation processes for any GAI-assisted work and oversee outsourcing or use of third-party GAI services.
- Some courts impose reporting and certification requirements on GAI-produced outputs. Attorneys must ensure compliance with all applicable rules.
Candor to Tribunals
- Attorneys representing a company in contested matters, such as litigation, investigations, or administrative proceedings, must comply with requirements of candor and truthfulness (e.g., ensuring no filings, correspondence, or testimony relies on unverified or fabricated/hallucinated GAI output).
- Attorneys must ensure compliance with jurisdictional and local rules for reporting and certification requirements.
Informed Consent
- Attorneys must obtain the client’s informed consent before using GAI tools in the representation.
Management of Outside Counsel
- Attorneys should ensure that outside counsel discloses any intended use of GAI, secures the company’s informed consent, and structures fees in a manner that reflects actual attorney time or demonstrable value added.
Additional Policy Considerations
- Maintain an internal AI usage log for key matters documenting which tool was used, its purpose, and the validation and security steps taken.
- Review AI policies (and specific GAI tool capabilities and risks) regularly to keep pace with the rapidly evolving landscape.
- Be aware that European, Canadian, and US state-based legislation may impose further restrictions.
- Special considerations apply to personally identifiable information (PII), particularly with respect to potential model training or inadvertent data reproduction. Ensure appropriate data-cleaning processes are in place to anonymize PII before use in any AI system.
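To make the usage-log point above concrete, here is a minimal sketch of what a matter-level AI usage log entry might look like. The schema, field names, and sample values are illustrative assumptions, not a prescribed format; a firm would tailor the fields to its own policy and record-retention practices.

```python
import csv
import io
from dataclasses import dataclass, asdict

# Hypothetical log-entry schema -- field names and values are illustrative only.
@dataclass
class AIUsageEntry:
    matter_id: str          # internal matter reference
    tool: str               # which GAI tool was used
    purpose: str            # what the tool was used for
    validation_steps: str   # how the output was reviewed
    security_steps: str     # confidentiality measures taken
    logged_on: str          # date of the entry (ISO format)

def log_to_csv(entries: list) -> str:
    """Serialize entries to CSV for retention with the matter file."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=list(asdict(entries[0]).keys()))
    writer.writeheader()
    for entry in entries:
        writer.writerow(asdict(entry))
    return buf.getvalue()

entry = AIUsageEntry(
    matter_id="2024-0117",
    tool="Enterprise GAI platform (vendor contract reviewed)",
    purpose="First-draft summary of deposition transcript",
    validation_steps="Attorney line-by-line review; citations checked against record",
    security_steps="Enterprise tenant; PII redacted from prompts; retention disabled",
    logged_on="2024-06-03",
)
print(log_to_csv([entry]))
```

A structured log of this kind makes it straightforward to answer later questions about which tool touched a matter and what validation and security steps were documented at the time.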
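As a rough illustration of the PII data-cleaning step described above, the sketch below redacts a few easily pattern-matched identifiers before text would be submitted to an AI tool. The patterns shown (US Social Security numbers and email addresses) are assumptions for demonstration only; real anonymization requires a vetted, much broader process, since names, addresses, and context clues cannot be caught by simple patterns.

```python
import re

# Illustrative patterns only -- these cover US SSNs and email addresses,
# nothing more; a production process must go far beyond regex matching.
PII_PATTERNS = {
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
}

def redact(text: str) -> str:
    """Replace matched PII with a labeled placeholder before any AI submission."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

sample = "Claimant (SSN 123-45-6789, jdoe@example.com) alleges wrongful termination."
print(redact(sample))
```

Even with such a step in place, the redacted output should still be reviewed before submission, consistent with the verification obligations discussed above.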
Key Takeaways:
- Attorney responsibility cannot be delegated to AI. Generative AI can assist with tasks, but attorneys remain fully accountable for the accuracy, reliability, and legal sufficiency of all work product.
- Confidentiality must be protected at all times. Sensitive or confidential information should only be used with vetted tools that offer strong data-security and confidentiality safeguards—and never with public, non-enterprise platforms.
- Verification is essential. All AI-generated content must be independently reviewed and validated to catch errors, hallucinations, and inaccuracies.
- Policies and training are critical. Firms and legal departments should implement clear AI use policies, training programs, and supervision protocols for attorneys, staff, and third-party vendors.
- Regulatory requirements continue to evolve. Stay aware of jurisdiction-specific rules and court directives—including disclosure or certification obligations—related to AI usage.
- Client consent matters. Attorneys should obtain informed consent before using AI tools in client work and ensure that outside counsel follows the same practice.
- Data privacy and trade secrets require heightened caution. PII and trade secret information trigger additional risks, including potential loss of confidentiality protections if mishandled.
- Policies should be regularly updated. The rapid pace of AI development necessitates periodic review of internal policies, tools, and risk assessments.
Please note that this article is not intended to be exhaustive, but rather to provide guidance for policy development.