Microsoft 365 Copilot is here. Will it be a big timesaver for your business? Does using it present significant legal risks?
While Microsoft offers “Copilot” AI solutions in various settings, this column addresses only Microsoft 365 Copilot.
What Does Copilot Do?
Copilot adds generative AI capability to core Microsoft Office applications, such as Word, Outlook, Excel, Teams, and PowerPoint. It can be used to create, summarize, and analyze content in those applications. For example, it purportedly can summarize what was said in a Teams video conference or an email chain in Outlook.
It became available on November 1, but only to large enterprise licensees – those purchasing a minimum of 300 individual Copilot licenses. It operates on top of Microsoft 365, and Microsoft charges $30 per user per month to add Copilot to 365 (so the 300-license minimum works out to a commitment of roughly $9,000 per month). Microsoft says it intends to roll out Copilot to smaller 365 customers in 2024 but hasn’t set a schedule.
Because my firm is small, I have not been able to try it yet. Still, in theory, it could be a powerful productivity booster for businesses using Microsoft.
To generate output for an individual user, Copilot can draw upon everything in the company’s Microsoft ecosystem to which that user has at least viewing rights. In that way, it might do what generally available AI chatbots such as ChatGPT cannot: produce output that draws upon your own materials.
ChatGPT’s training does not draw upon your materials, except for what it learns from your prompts and the output it generates from them. A significant part of ChatGPT’s training is believed to be material pulled from the Internet, and it was trained for general use, not for specific use cases.
For example, let’s say I wanted an AI to draft a specific kind of contract. There are many garbage contract forms online, and they likely would influence what ChatGPT produces. Perhaps its draft would generate food for thought, but it almost certainly wouldn’t be a good starting point for something I would craft.
It would be a game changer if a generative AI could consider contracts I previously drafted when producing a new draft. I have read that Copilot can generate output based on up to three specific documents you point it to, in addition to its general study of the portion of the Microsoft ecosystem to which you have at least viewing privileges. Thus, I’m eager to try Copilot when my small law firm is eligible.
But What About the Legal Risks? First, Confidentiality.
The biggest concern is confidentiality. With many generally available generative AIs, such as ChatGPT, anything you put in a prompt is used in the AI’s training. That creates a risk that your input could appear in someone else’s output. Also, the AI provider can see your input and any output you generate. For these reasons, you should not put anything confidential or sensitive in a prompt, such as personally identifiable information, protected health information, confidential financial information, or attorney-client privileged information.
Microsoft promises that, with Copilot, your inputs and outputs are kept confidential. It says it will not use your inputs or outputs to train its AI, which means your input should not show up in the output of other Copilot users (at least outside of your company). That’s huge.
But there is a major catch: Microsoft says it captures and retains your Copilot prompts and outputs for 30 days. It operates an automated abuse monitoring system that screens that material for violations of its code of conduct and “other applicable product terms,” and it reserves the right to have human reviewers examine stored prompts and outputs flagged by that system.
That’s a significant problem if the prompt or output may contain sensitive or confidential information, especially if you are required by law to maintain confidentiality or would lose confidentiality protection by giving a third party access to the material. For example, generally speaking, divulging attorney-client communications to a third party may waive the attorney-client privilege as to that material.
Microsoft says customers whose inputs may contain sensitive, confidential, or legally regulated data can apply to Microsoft for an exemption from this abuse monitoring. I’m concerned that many users may be unaware of this major catch and could get burned by it in subsequent legal proceedings.
There are two other important caveats.
Microsoft says your data may leave the Microsoft 365 service boundary when you allow Copilot to reference public web content via Bing; in that scenario, the query sent to Bing might include your data. But according to Microsoft, admins and users can prevent Copilot from referencing web content in their requests, which may be vital for data-sensitive users to know.
Microsoft also says your data may leave the Microsoft 365 service boundary when you use plugins with Copilot to provide more relevant information. Again, admins and users control which plugins Copilot can access. Before allowing Copilot to access a plugin, Microsoft recommends checking the plugin’s privacy statement and terms of use to determine how it will handle your data.
Copyright Infringement Risk
Microsoft also offers copyright-infringement coverage for Copilot’s output, with certain qualifications. Microsoft is doing this to insulate its licensees from the type of copyright-infringement lawsuits being filed by content creators.
The concern is that the AI might produce output highly similar to something it ingested in training without the content creator’s permission (such as material scraped off the Internet). If that happened, the output might be a copyright infringement. Even though Copilot studies your materials in your Microsoft ecosystem, its AI also carries forward its training on outside material.
There are important limitations on the copyright-infringement coverage Microsoft provides for Copilot. Users should have their legal counsel and technology teams review and understand those limitations.
Other Risks to Consider:
Over-Permissioning. For any individual user, Copilot draws upon everything in the company’s Microsoft ecosystem to which that user has at least viewing permission. Many companies are far too permissive in granting Microsoft 365 access privileges to individual employees. This over-permissioning is often accidental, resulting from a failure to set permissions carefully. Because of it, Copilot could pull information into an employee’s output that the company never intended the employee to access.
Information Theft. Some also say Copilot will make it easier for employees to harvest and steal sensitive company information. I’m not enough of a technologist to assess that threat.
Confidentiality Between Clients. Setting aside the issue of over-permissioning, employees must carefully review all output to ensure it doesn’t violate confidentiality obligations. For example, Copilot might pull data from documents about Client A when building a sales pitch for Client B. You must make sure Client A’s sensitive data isn’t disclosed.
Copyright Ownership. Also, while the law is still taking shape, the early returns are that material generated by an AI can’t be anyone’s copyright property. If it’s important to the company to own the copyright in Copilot’s output, using Copilot may undercut that ownership. The company could still own the copyright in modifications and extensions of the output made by its employees, but that leaves a messy ownership picture.
Hallucinations. There will still be a problem with hallucinations, which occur when an AI states something in its output that sounds authoritative but is wrong. An AI user either has to know whether the output is correct or has to spend the time to verify it.
Bias and Discrimination. There also is a risk that bias or discrimination claims could arise from use of Copilot in some circumstances. For example, all sorts of statutes, regulations, and case law create the basis for bias or discrimination claims in employment and public accommodation settings, such as job-applicant screening. Using an AI such as Copilot in those settings must conform to those laws.
Other Technical Risks. From what I have read, Copilot may also present other technical risks that could become legal problems. For example, it may not include appropriate data loss prevention labels in its output, which could raise the risk of unintentionally exposing sensitive data, and it may lack a robust audit trail, which may hinder tracing actions back to specific Copilot users.
The Big Picture
Still, all technological systems come with some risks. There are technical and legal risks in using Microsoft 365 even without Copilot. You are still working in the cloud, and Microsoft has access to some of your information. You still have the issue of over-permissioning.
My biggest concern is whether small users will have the technical and legal support to use Copilot without undue risk. Someone must stay on top of permissions and other technical settings. Some process must exist to vet outputs before they are distributed outside the company. And the company must understand the legal risks, such as the details of the copyright-infringement coverage Microsoft offers and whether the confidentiality and data protections are sufficient for that company’s specific situation. That’s a lot to manage.
Written on December 19, 2023
by John B. Farmer
© 2023 Leading-Edge Law Group, PLC. All rights reserved.