Zoom's Controversial AI Policy: From Under-the-Radar to Under Fire
Recently, Zoom faced criticism over an update to their terms of service, introduced in March and enacted on July 27, 2023:
10.4: "You agree to grant and hereby grant Zoom [a license to] Customer Content …" for a long list of purposes, including "machine learning" and "artificial intelligence."
When this change received broad publicity on social media in August, Zoom was forced to clarify their stance and update their terms to stem the backlash.
Why did this change raise serious alarms with Zoom's user base? Because it puts user privacy at risk. Information used to train generative Large Language Models (LLMs) can resurface in their output, sometimes word for word, occasionally whole paragraphs verbatim. If Zoom feeds your private call into their model, their model may feed parts of your private call to somebody else.
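To make the risk concrete, here is a minimal, hypothetical sketch (plain Python, nobody's production code) of how such leakage can be detected: if any long n-gram from a private transcript reappears verbatim in a model's output, the model is echoing its training data.

```python
def ngrams(text: str, n: int = 8):
    """Yield word-level n-grams from a text."""
    words = text.split()
    for i in range(len(words) - n + 1):
        yield " ".join(words[i:i + n])

def leaked_spans(private_text: str, model_output: str, n: int = 8) -> list[str]:
    """Return n-grams of the private text that reappear verbatim in the model output.

    Any hit means the model is reproducing training data word for word.
    """
    private = set(ngrams(private_text, n))
    return sorted(private.intersection(ngrams(model_output, n)))

# Hypothetical example: a confidential call transcript vs. text a model emitted later.
transcript = "the settlement figure we are prepared to offer is strictly confidential"
model_output = "one party said the settlement figure we are prepared to offer is strictly confidential"
print(leaked_spans(transcript, model_output, n=6))
```

Even a handful of matches is a red flag that confidential material was memorized during training.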
Prevail's software uses LLMs extensively, and with our background in the legal industry, we understand the need for a secure, private, and reliable platform. We weigh two high-level questions when evaluating a software vendor:
- Do the company's behavior and culture show a demonstrated history of putting customer safety and privacy first?
- Does the company deploy technology safely and carefully?
The Murky Waters of Behavioral Data and AI Training
Many software companies use their customers' video, audio, and written content to train their AI, often in the name of improving the user experience. However, this practice is unacceptable to those for whom data security, especially for sensitive legal or proprietary information, is not optional.
Opting out of Zoom's use of meeting content doesn't shield you from the gray area of "service generated data," either. This data, which Zoom considers its property, encompasses user behaviors such as platform interactions, feature usage, call durations, and location information.
Given the stringent confidentiality requirements of legal work, platforms like Zoom, with their history of privacy concerns, might not be the best choice for remote legal proceedings. If you've been relying on Zoom for remote or virtual depositions, a reassessment might be in order.
Trustworthiness in Legal Tech
In legal matters, particularly depositions, confidentiality is critical and non-negotiable. In this regard, Zoom's recent policy missteps underscore their unsuitability for sensitive legal contexts and highlight the risks of using generalized platforms for specialized tasks.
When Prevail was founded in 2019, its founders had already dedicated years to developing AI with a strong emphasis on privacy and security. Within the broad landscape of AI, there's a distinction between Generative AI, which creates new content, and Translational AI, which uses LLMs to transform existing content (think of turning Spanish text into English, or converting spoken words into text). The output of Translational AI is readily verified against its input and free of recycled proprietary data.
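To illustrate how directly translational output can be checked against its source, here is a minimal sketch using the open-source Hugging Face transformers library and a publicly available Spanish-to-English model (chosen purely for illustration; it says nothing about the models Prevail actually runs):

```python
# pip install transformers sentencepiece torch
from transformers import pipeline

# A public Spanish-to-English model, used here only as an example.
translator = pipeline("translation", model="Helsinki-NLP/opus-mt-es-en")

source = "La declaración del testigo fue grabada el martes."
result = translator(source)[0]["translation_text"]
print(result)  # e.g. "The witness's statement was recorded on Tuesday."
```

Because every output sentence corresponds to a sentence in the input, a bilingual reviewer can verify the result line by line; there is no open-ended generation in which someone else's proprietary training data could surface.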
We've observed that AI solutions that rely solely on automation increase risk. By integrating expert human review into the process, we can enhance the reliability and security of AI outputs. Our approach champions the idea that human-reviewed outputs from carefully trained models are the more trustworthy way to handle sensitive and confidential data.
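As a simplified sketch of that principle (hypothetical names and thresholds, not our production pipeline), automated output can be gated so that anything the model is unsure about goes to an expert reviewer before release:

```python
from dataclasses import dataclass

@dataclass
class Segment:
    text: str
    confidence: float  # model-reported confidence, 0.0 to 1.0

def route_for_review(segments: list[Segment], threshold: float = 0.9):
    """Split model output into auto-accepted and human-review queues.

    Low-confidence segments are never released without expert sign-off.
    """
    accepted, needs_review = [], []
    for seg in segments:
        (accepted if seg.confidence >= threshold else needs_review).append(seg)
    return accepted, needs_review

segments = [
    Segment("Please state your name for the record.", 0.98),
    Segment("The incident occurred on the fifth of May.", 0.74),
]
auto, queue = route_for_review(segments)
print(f"{len(auto)} auto-accepted, {len(queue)} routed to a human reviewer")
```

The design principle is simple: automation accelerates the easy cases, while a trained human signs off on anything the model is uncertain about.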
Zoom's Policy Roller Coaster Ride
The recent adjustments to Zoom's terms of service underscore a recurring issue: data security in the digital age. While deceptive disclosure practices fueled the current uproar, the ensuing fallout forced Zoom to quickly amend its terms of service. They clarified that audio, video, and chat content wouldn't be used to train AI models without consent, and days later revised their policies again to state that such content wouldn't be used for these purposes at all.
News coverage of the policy change indicates that some users remain concerned that meeting hosts can still consent to data sharing. However, Zoom's recent blog post and their August 11, 2023 terms of service update now assert that "Zoom does not use any of your audio, video, chat…to train Zoom or third-party artificial intelligence models."
However, given Zoom's track record of treating customer safety and privacy as an impediment to business goals, this assurance offers little comfort about future policy shifts. Zoom's terms also state: "You agree that Zoom may modify, delete, and make additions to its guides, statements, policies, and notices, with or without notice to you…"
Users are encouraged to remain alert to potential updates.
Dubious Practices: Zoom’s Legacy of Missteps
Zoom's past is not devoid of privacy pitfalls either. Their track record includes controversies over end-to-end encryption claims, data-sharing with tech giants like Google and Facebook, and an $85 million class-action settlement over privacy issues in 2021. In a separate settlement, the FTC required Zoom to implement new security measures and to review product updates for potential vulnerabilities.
Beyond these significant controversies, Zoom users have faced their fair share of challenges. The "Zoombombing" phenomenon, rampant during and after the pandemic, disrupted countless meetings, garnering considerable media attention and leading to several lawsuits.
More recently, in 2022, a flaw in Zoom's macOS installer left users vulnerable to potential root-access exploits; the bug required multiple fixes and further underscored ongoing concerns about the platform's commitment to security and privacy.
Another notable lapse in judgment surfaced in 2020, when Zoom's "attendee attention tracker" drew rightful criticism for its invasive nature and was ultimately removed, further underscoring the company's history of insensitivity towards privacy considerations.
Balancing Innovation with Trust: Prevail's Commitment to Privacy
Prevail recognizes the specific needs of the legal community and strives to earn our users' trust by adhering to best practices, including end-to-end encryption, external security scans, and software auditing. We take these steps proactively, not reactively, and we never use your private data to train AI models in a way that would compromise confidential information.
In our tech-driven environment, the scales often tip towards innovation at the cost of privacy. Information security is a difficult challenge for everyone, and it is a shared one.
We welcome your questions and feedback, now and always.