
Don't get burned by AI tools: lessons for lawyers from the Otter.ai lawsuit


Issue 42 

On August 15, 2025, a California resident named Justin Brewer filed a class action lawsuit against Otter.ai, Inc. (“Otter”), alleging that Otter recorded and used his conversations without his prior consent.[i] The complaint alleges that Otter offers an AI-powered meeting assistant called Otter Notetaker, which provides real-time transcription during meetings.[ii] Mr. Brewer alleges that he participated in a meeting where another participant used Otter Notetaker to transcribe the conversation, that he was not informed Otter would use his communications to train its AI models, and that Otter never obtained his consent to record or use his communications.[iii] The complaint further alleges that Otter fails to disclose to Otter account holders who enable Otter Notetaker for meetings that the meetings are being used to train Otter’s AI models.[iv] 

The complaint explains that when an Otter account holder’s Otter Notetaker joins a meeting, it will request permission from the meeting host (if not an Otter account holder) to join and record the meeting, but does not seek consent from any other meeting participants, nor can other meeting participants disable Otter Notetaker during the meeting.[v] Further, the complaint alleges that in order for Otter to send a pre-meeting message to obtain consent from meeting participants, a default setting must be turned from off to on.[vi]  

The complaint acknowledges that Otter’s privacy policy discloses that it trains its AI models with Otter transcripts and audio recordings.[vii] However, the complaint contends that Otter does not seek the consent of any meeting participants besides the Otter account holder to use recordings or transcripts for AI training, and instead attempts to shift its legal obligations to account holders to ensure that the necessary permissions have been obtained.[viii] Further, the complaint alleges that while Otter claims to de-identify audio recordings, upon information and belief, confidential information is not removed, and speaker anonymity is not guaranteed.[ix] 

The complaint alleges violations of the Electronic Communications Privacy Act of 1986, the Computer Fraud and Abuse Act, the California Invasion of Privacy Act, the California Comprehensive Computer Data Access and Fraud Act, and the California Unfair Competition Law, as well as claims of intrusion upon seclusion and conversion.[x] 

This lawsuit underscores some of the potential pitfalls of using AI tools without first performing due diligence. Below are two lessons lawyers can take from the lawsuit against Otter to reduce their risk of becoming embroiled in an AI-related nightmare.  

Lessons for Lawyers 

  1. Law firms need workplace policies defining which AI tools can be used by employees. 

The State of AI in Business 2025 report from MIT’s Project NANDA found that while 40 percent of companies surveyed had purchased subscriptions for AI tools, workers at over 90 percent of companies surveyed admitted using personal AI tools for their work.[xi] The unauthorized personal use of AI tools in a workplace is known as shadow AI. Even if the percentage of workers at law firms engaging in shadow AI is somewhat lower, any unauthorized use of AI tools by employees presents a potential risk to the confidential data of the law firm’s clients, as well as the firm’s own data. For example, if someone used an unauthorized and unvetted AI meeting notetaker for a client meeting or a partnership meeting, confidential data could be inadvertently disclosed to the AI company, which could retain the data and use it for training purposes. Additionally, depending on the privacy laws of the jurisdiction, using an AI notetaking tool may require the consent of all meeting participants. 

For these reasons, law firms and other legal organizations need to draft policies governing workplace AI use, and all employees need to be educated about the reasons for these policies. If you would like more information about how you can reduce your AI risk, I’ve developed a free resource to help you get started. A Lawyer’s First Three Steps to Reduce AI Risk will walk you through three actionable steps you can take for a quick and impactful reduction of your AI risk. It includes a checklist you can use to keep track of your progress as you work through the steps, and some additional things to consider once you are ready to move forward with managing your AI risk more broadly. You can sign up for this resource here.  

  2. AI tools need to be vetted before being used in law practice. 

A law practice is not a place to casually experiment with AI tools. The risks to your clients and your reputation are too great, and your time is too precious. Before an AI tool is used in a legal practice, it needs to be properly vetted. As described above, a lawyer or other staff member who tries out an AI tool such as a meeting notetaker without assessing the potential risks could expose confidential client or organization data, or run afoul of the jurisdiction’s privacy laws. 

Workplace AI use should be governed by an organization’s leadership. The process of selecting an AI tool for a legal practice involves: 

  • Strategically getting clear on how new technology could make the biggest impact in your organization; 
  • Finding technology solutions that align with your organization’s needs; 
  • Evaluating the risks associated with the options available to your organization; and 
  • Testing your options before selecting and implementing one or more tools. 

If you would like a resource that can guide you step by step through evaluating and implementing AI tools, as well as provide you with a curated directory of over 200 AI tools developed for lawyers, organized by use cases, practice areas, and integrations with other AI tools, please take a look at A Lawyer’s Practical Guide to AI. You can get instant access to the guide here.   

Thanks for being here.  

Jennifer Ballard
Good Journey Consulting 

P.S. If you know a lawyer who might be interested in reducing their AI risk, please consider forwarding this issue of the newsletter to them. Thank you. If you are new to this newsletter, welcome! If you are interested in receiving AI news and analysis for lawyers every other week, you can sign up to have this newsletter delivered to your inbox here.    

 

[i] Class Action Complaint at 2-3, Brewer v. Otter.ai, Inc., No. 5:25-cv-06911 (N.D. Cal. filed Aug. 15, 2025). 

[ii] Id. at 2. 

[iii] Id. at 10. 

[iv] Id. at 2, 7. 

[v] Id. at 6. 

[vi] Id. 

[vii] Id. at 7. 

[viii] Id. at 7-8. 

[ix] Id. at 8.   

[x] Id. at 12-25. 

[xi] Aditya Challapally, Chris Pease, Ramesh Raskar & Pradyumna Chari, The GenAI Divide: State of AI in Business 2025, at 8 (MIT NANDA, July 2025), https://mlq.ai/media/quarterly_decks/v0.1_State_of_AI_in_Business_2025_Report.pdf.
