April 17, 2025

Lessons from GitHub's Breach: Protecting Content Used by M365 Copilot

When news broke about GitHub Copilot exposing private repositories, it raised critical questions about the future of AI, security, and privacy. The story, reported by Ars Technica, detailed how more than 20,000 private repositories were unintentionally left accessible, exposing sensitive data ranging from authentication credentials to private encryption keys.

While this incident caused waves in the tech industry, it also gave businesses a moment for reflection. More importantly, it highlighted the need for robust safeguards in environments that use AI tools for processing sensitive content.

Here’s the good news for Microsoft 365 Copilot users: The specific issues tied to GitHub Copilot couldn’t happen in an M365 environment. Better still, Microsoft 365 Copilot offers tools to ensure your content remains secure while enhancing productivity. However, there are important lessons we can all learn from this breach.

Why Microsoft 365 Copilot Is Different

GitHub Copilot and Microsoft 365 Copilot may share a similar name, but they operate in entirely different contexts. The key difference? Microsoft 365 Copilot operates within a controlled environment, using permission-based access.

Here’s how that works:

  • Permission Boundaries. Microsoft 365 Copilot respects the permissions already set in your organization. If a user doesn’t have access to a particular file or document, they cannot use Copilot to retrieve or display it (a simplified sketch of this idea follows the list).
  • Semantic Indexing. This smart feature allows Copilot to better understand your organization’s terminology. For example, if you search for "remittance," it might also surface documents labeled "pay stubs" within the same permission boundaries.
  • Controlled Access. Unlike the public nature of some GitHub repositories, Microsoft 365 is built to protect private organizational data, even while leveraging AI.
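
To make that concrete, here is a minimal, purely conceptual sketch in Python. This is not Microsoft’s implementation, and every name in it is illustrative: a search expands a query through a small synonym map (standing in for semantic indexing), then trims the results to documents the requesting user can already open.

```python
# Conceptual sketch only: permission-trimmed retrieval.
# A Copilot-style search should surface only documents the user can already open.
from dataclasses import dataclass, field

@dataclass
class Document:
    title: str
    keywords: set[str]
    allowed_users: set[str]          # users who already have access

@dataclass
class Index:
    documents: list[Document] = field(default_factory=list)
    # Hypothetical synonym map standing in for semantic indexing
    synonyms: dict[str, set[str]] = field(default_factory=dict)

    def search(self, user: str, term: str) -> list[Document]:
        terms = {term} | self.synonyms.get(term, set())
        return [
            doc for doc in self.documents
            if terms & doc.keywords            # semantic match
            and user in doc.allowed_users      # permission trimming
        ]

index = Index(
    documents=[
        Document("Q1 pay stubs", {"pay stubs"}, {"alice@contoso.com"}),
        Document("Vendor remittance log", {"remittance"}, {"bob@contoso.com"}),
    ],
    synonyms={"remittance": {"pay stubs"}},
)

# Alice sees only the document she already has access to, even though
# the synonym expansion matched both documents.
print([d.title for d in index.search("alice@contoso.com", "remittance")])
```

The point of the example is the order of operations: the semantic match widens what is found, but the permission check still decides what is shown.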

Despite these advantages, the GitHub breach teaches us the importance of proactively managing permissions, limiting data access, and applying advanced security measures.

3 Lessons from the GitHub Breach

The GitHub Copilot incident drives home the importance of taking control of your content environment. To avoid potential risks, here are three steps Microsoft 365 Copilot users can take today:

1. Organize and Audit Permissions

Permissions often get messy. Over time, teams overshare, permission inheritance gets broken, and groups meant to be private quietly become public. Before deploying AI tools like Microsoft 365 Copilot, ensure your permissions are clean and intentional:

  • Conduct regular permissions audits to identify overly broad sharing settings (a starter sketch follows this list)
  • Address issues like public-facing groups that should remain private within your organization
  • Make permissions clear and functional—Copilot can only be as secure as the foundation you build for it
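
One way to start such an audit is with a short script against the Microsoft Graph API. The sketch below assumes you already have an access token with Group.Read.All and Files.Read.All (token acquisition is not shown) and that DRIVE_ID is a placeholder for a single document library; a real audit would walk every site and library. It flags public Microsoft 365 groups and files shared through anonymous or organization-wide links.

```python
# Starter audit sketch against Microsoft Graph.
# Assumptions: TOKEN and DRIVE_ID are placeholders you supply yourself.
import requests

GRAPH = "https://graph.microsoft.com/v1.0"
TOKEN = "<access-token>"                 # acquire via MSAL or similar
DRIVE_ID = "<document-library-drive-id>"
HEADERS = {"Authorization": f"Bearer {TOKEN}"}

def flag_public_groups():
    """List Microsoft 365 groups whose visibility is Public."""
    url = f"{GRAPH}/groups?$select=id,displayName,visibility"
    while url:
        page = requests.get(url, headers=HEADERS).json()
        for group in page.get("value", []):
            if group.get("visibility") == "Public":
                print(f"PUBLIC GROUP: {group['displayName']}")
        url = page.get("@odata.nextLink")   # follow paging

def flag_broad_sharing_links():
    """Flag files in one library shared via anonymous or org-wide links."""
    items = requests.get(f"{GRAPH}/drives/{DRIVE_ID}/root/children",
                         headers=HEADERS).json().get("value", [])
    for item in items:
        perms = requests.get(
            f"{GRAPH}/drives/{DRIVE_ID}/items/{item['id']}/permissions",
            headers=HEADERS).json().get("value", [])
        for perm in perms:
            scope = perm.get("link", {}).get("scope")
            if scope in ("anonymous", "organization"):
                print(f"BROAD LINK ({scope}): {item['name']}")

if __name__ == "__main__":
    flag_public_groups()
    flag_broad_sharing_links()
```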

2. Scope Copilot’s Access with SharePoint Agents

Microsoft is introducing SharePoint Agents, which allow organizations to limit Copilot’s scope to specific content. This is a game-changer for both security and day-to-day usability.

For example:

  • Create a Copilot Agent that only pulls HR-related information for onboarding and benefits
  • Limit other Agents to specific departments like legal or marketing to ensure sensitive data isn’t crossing boundaries

Tailoring access by business use cases helps prevent unnecessary exposure while creating a streamlined experience for employees.
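
As a purely conceptual illustration (this is not the actual SharePoint agent configuration format; the class, site URLs, and method names are made up), scoping comes down to one rule: the agent can only ground its answers on content that lives under its approved sources.

```python
# Conceptual sketch only: a scoped agent may ground answers
# solely on content under its approved SharePoint sources.
class ScopedAgent:
    def __init__(self, name: str, allowed_sources: list[str]):
        self.name = name
        # e.g. specific sites or document libraries the agent may read
        self.allowed_sources = [s.rstrip("/").lower() for s in allowed_sources]

    def can_ground_on(self, document_url: str) -> bool:
        """True only if the document lives under one of the agent's sources."""
        url = document_url.lower()
        return any(url.startswith(source) for source in self.allowed_sources)

hr_agent = ScopedAgent(
    "HR Onboarding Agent",
    ["https://contoso.sharepoint.com/sites/HR/Onboarding"],
)

print(hr_agent.can_ground_on(
    "https://contoso.sharepoint.com/sites/HR/Onboarding/benefits-guide.docx"))  # True
print(hr_agent.can_ground_on(
    "https://contoso.sharepoint.com/sites/Legal/contracts/nda.docx"))           # False
```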

Ready to learn how SharePoint Agents can work for you? Watch our Finding the Needle in the Haystack with SharePoint Agents webinar.

3. Use Sensitivity Labels and Advanced Security Measures

When it comes to confidential data, applying sensitivity labels at all levels—documents, sites, and teams—provides an extra layer of control. These labels can dictate how documents are shared, who they're shared with, and what security policies apply.

Best practices include:

  • Applying labels for sensitive documents such as contracts or employee information
  • Setting permissions down to individual files to avoid accidental sharing of confidential materials
  • Regularly updating your sensitivity labels to reflect new organizational needs or threats

Coupled with Microsoft 365 Copilot’s built-in safeguards, applying sensitivity labels can drastically lower the risk of unauthorized access.
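
To illustrate the idea rather than any specific Microsoft Purview API, here is a toy Python model in which each label carries a sharing rule that is checked before a document goes out. The label names, the internal domain, and the policy table are all invented for the example.

```python
# Toy model, illustrative only (not the Microsoft Purview API): sensitivity
# labels mapped to sharing rules, checked before a share request is honored.
from dataclasses import dataclass

@dataclass(frozen=True)
class LabelPolicy:
    label: str
    allow_external_sharing: bool

POLICIES = {
    "General": LabelPolicy("General", allow_external_sharing=True),
    "Confidential": LabelPolicy("Confidential", allow_external_sharing=False),
    "Highly Confidential": LabelPolicy("Highly Confidential",
                                       allow_external_sharing=False),
}

def share_allowed(document_label: str, recipient: str,
                  internal_domain: str = "contoso.com") -> bool:
    """Decide whether a share request complies with the document's label."""
    policy = POLICIES[document_label]
    recipient_domain = recipient.split("@")[-1]
    if recipient_domain == internal_domain:
        return True                          # internal sharing is allowed here
    return policy.allow_external_sharing     # external sharing depends on label

print(share_allowed("General", "partner@fabrikam.com"))        # True
print(share_allowed("Confidential", "partner@fabrikam.com"))   # False
print(share_allowed("Confidential", "colleague@contoso.com"))  # True
```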

Look Ahead: Building AI-Powered Trust

The GitHub Copilot breach was a wake-up call. While Microsoft 365 Copilot avoids many of the same pitfalls by design, we should take this as an opportunity to strengthen how we manage access, security, and AI-supported productivity.

AI tools are powerful allies in efficiency, creativity, and decision-making, but they’re only as good as the systems we create to protect our data. By keeping permissions tidy, leveraging advanced AI tools like SharePoint Agents, and using sensitivity labels strategically, we can create an environment where AI thrives safely.

What Steps Are You Taking?

The way forward is about proactive preparation, not reactionary panic. What policies or procedures does your organization have in place to manage permissions and sensitive content? Which of the above lessons could your team adopt today?

Want to Learn More?

With AI sweeping through industries, now's the time to take charge and secure your content environment. To learn more about just one of the recommendations above, watch our Finding the Needle in the Haystack with SharePoint Agents webinar.
