Risk Update

Law Firm AI Risk — Professional, Privacy and Ethical Concerns Meet Unfolding Opportunity

Above the Law writes: “ChatGPT: A Menace To Privacy” —

  • “Open AI in its terms of use states the following:
    • You may provide input to the Services (“Input”), and receive output generated and returned by the Services based on the Input (“Output”). Input and Output are collectively “Content.” As between the parties and to the extent permitted by applicable law, you own all Input.
  • “The above statement is a little misleading. If the user owns all input, then input should not be stored in the database without the user’s permission. The terms of use are silent as to how this AI will be storing input and how a user can exercise their rights on the input that they own. Merely stating that the user owns input is not enough.”
  • “This should stand out as a big red flag for privacy professionals. Privacy laws around the globe are in place to protect an individual’s personal data or personally identifiable information.”
  • “As mentioned earlier, legal professionals are using this tool to proofread contracts and even draft contracts. As a result, they are exposing sensitive information and at times inadvertently entering personal information into the system as well. That information shall remain in ChatGPT’s possession to use for whatever purpose they chose to. Well, isn’t that illegal? Technically yes, but Open AI nowhere claims to be compliant with GDPR, CCPA/CPRA, or any other privacy standard for that matter. So the user assumes the risk when using the chatbot, giving ChatGPT an easy escape.”
  • “Here are a few ways ChatGPT could violate standard privacy regulations:
    • It does not state a legal basis for processing the personal information it receives.
    • Users are not given a mechanism to exercise their “right to be forgotten” or “right to amend” personal information.
    • Personal information is stored indefinitely with no insight on how that data is secured and protected.
    • ChatGPT gathers information from unknown sources on the internet. If a user has any digital footprint, chances are ChatGPT knows a great deal about that user depending on what is available on the internet. This knowledge may be false, and the user has no recourse to correct, amend, or even delete the false information.”
  • “Here are some measures we can take while using ChatGPT:
    • Never enter personal information into ChatGPT. Always redact prior to exposing a legal document for review.
    • Never enter information about a client or customer. Always create vague scenarios unrelated to the client prior to asking ChatGPT for assistance.
    • Keep your questions broad and generic so that they cannot be tied to another individual.”
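The redaction advice above can be sketched as a minimal pre-submission filter. This is a hypothetical helper, not anything from the quoted article; the regex patterns are illustrative only and would miss many forms of personal information in practice, so they are no substitute for manual review or a vetted PII-detection tool.

```python
import re

# Illustrative patterns only; real PII detection needs far broader coverage
# (names, addresses, account numbers, client identifiers, etc.).
REDACTION_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace matched patterns with labeled placeholders before any
    text is pasted into a third-party AI tool."""
    for label, pattern in REDACTION_PATTERNS.items():
        text = pattern.sub(f"[REDACTED {label}]", text)
    return text
```

A filter like this belongs at the boundary where text leaves the firm's control, so nothing reaches the chatbot unredacted by default.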

A case in point: “Samsung Software Engineers Busted for Pasting Proprietary Code Into ChatGPT” —

  • “Multiple employees of Samsung’s Korea-based semiconductor business plugged lines of confidential code into ChatGPT, effectively leaking corporate secrets that could be included in the chatbot’s future responses to other people around the world.”
  • “One employee copied buggy source code from a semiconductor database into the chatbot and asked it to identify a fix, according to The Economist Korea. Another employee did the same for a different piece of equipment, requesting ‘code optimization’ from ChatGPT. After a third employee asked the AI model to summarize meeting notes, Samsung executives stepped in. The company limited each employee’s prompt to ChatGPT to 1,024 bytes.”
  • “The OpenAI user guide warns users against this behavior: ‘We are not able to delete specific prompts from your history. Please don’t share any sensitive information in your conversations.’ It says the system uses all questions and text submitted to it as training data.”
  • “Samsung is reportedly considering building its own AI to prevent future mishaps, though engineers could likely bypass any measures by using ChatGPT on personal devices. Microsoft Bing and Google Bard can also detect bugs in lines of code, so banning ChatGPT is not a bulletproof solution.”
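Samsung's reported stopgap of capping each prompt at 1,024 bytes can be sketched as a simple guard. This is a hypothetical illustration, not Samsung's actual implementation; it rejects oversized prompts outright rather than silently truncating them, since truncation could cut off context mid-sentence.

```python
MAX_PROMPT_BYTES = 1024  # the per-prompt cap Samsung reportedly imposed

def check_prompt(prompt: str, limit: int = MAX_PROMPT_BYTES) -> str:
    """Raise if the prompt's UTF-8 encoding exceeds the byte limit;
    otherwise return it unchanged."""
    size = len(prompt.encode("utf-8"))
    if size > limit:
        raise ValueError(
            f"Prompt is {size} bytes; limit is {limit}. "
            "Shorten or split the request."
        )
    return prompt
```

As the article notes, a size cap alone is not a bulletproof control, since employees can simply use the chatbot from personal devices.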

Eileen Garczynski of Ames & Gough shares: “ChatGPT and the Emerging AI Risk Landscape” —

  • “AI’s knowledge is limited since it’s based only on the information used to train it… As a result, employers cannot be certain that the information this technology provides or what it produces is accurate. In some cases, AI-generated errors can be costly, subjecting organizations to liability, government audits, fines and penalties. Employers would be wise to verify the information produced by AI tools before using it.”
  • “This technology can create potential privacy issues for organizations. For example, employees may share proprietary, confidential or trade secret information with ChatGPT (or a similar AI chatbot) which will then become part of its database and could be included in responses to other parties’ prompts… Before using AI technology, employers should consider reviewing and updating their confidentiality and trade secret policies to ensure they cover third-party AI tools.”
  • “AI-generated content can also potentially violate IP infringement laws. For example, if the chatbot generates content similar to existing copyrighted or trademarked material, the organization using that content could be held liable for infringement.”
  • Organizations can train employees on potential copyright, trademark and IP infringement issues or restrict access to AI tools to reduce legal risks.

Jeff Brandt noted: “New report on ChatGPT & generative AI in law firms shows opportunities abound, even as concerns persist” —

  • “A new report discusses the evolving attitudes towards the use of generative AI and ChatGPT within law firms, surveying lawyers about the opportunities and potential risks.”
  • “It didn’t take long after OpenAI released its ChatGPT prototype for public use — shedding light on the myriad abilities that its underlying technology, generative artificial intelligence (AI), possessed — that many lawyers and legal industry experts became keenly aware of what these tools could mean for the profession and for law firms in particular.”
  • “In fact, a recent survey of law firm lawyers illustrated this dichotomy well — a large majority (82%) of those surveyed said they believe that ChatGPT and generative AI can be readily applied to legal work; and a slightly smaller majority (51%) said that ChatGPT and generative AI should be applied to legal work.”
  • “The survey, conducted in late-March by the Thomson Reuters Institute, gathered insight from more than 440 respondent lawyers at large and midsize law firms in the United States, United Kingdom, and Canada.”
  • “A large portion of respondents had concerns with use of ChatGPT and generative AI at work — 62%, which included 80% of partners or managing partners. Further, many of the concerns voiced in our survey seemed to revolve around the technology’s accuracy and security, most specifically about how law firms’ concerns of privacy and client confidentiality will be addressed.”
  • “Still, many legal industry observers (and many of our respondents) know that by any measure, we are still early in the game for generative AI and ChatGPTs. It is expected that time and experimentation will make users more comfortable with these tools, and a day will come when generative AI and ChatGPT is in as common use within law firms as online legal research and electronic contract signing are now.”