2024-10-24 –, Europe - Main Room
The good: There's an insider working at your competition, helping you.
The bad: There's also an insider working at your business, helping the competition.
The ugly: It's Microsoft Copilot.
The race to capture the benefits of GenAI is already at full speed, and everybody is diving head-first into putting corporate data and operations in the hands of AI. The concept of a Copilot has emerged as a way to keep AI tamed and under control. However, while employees rarely cross the lines and become rogue, it turns out that Microsoft Copilot is rogue by design.
In this talk, we will show how your Copilot Studio bots can easily be used to exfiltrate sensitive enterprise data circumventing existing controls like DLP. We will show how a combination of insecure defaults, over permissive plugins and wishful design thinking makes data leakage probable, not just possible. We will analyze how Copilot Studio puts enterprise data and operations in the hands of GenAI, and expose how this exacerbates the prompt injection attack surface, leading to material impact on integrity and confidentiality.
Next, we will drop CopilotHunter, a recon and exploitation tool that scans for publicly accessible Copilots and uses fuzzing and GenAI to abuse them to extract sensitive enterprise data. We will share our findings targeting thousands of accessible bots, revealing sensitive data and corporate credentials.
Finally, we will offer a path forward by sharing concrete configurations and mistakes to avoid on Microsoft’s platform, and generalized insights on how to build secure and reliable Copilots.
Outline:
1. Where we are
1.1. Everybody racing to build GenAI applications and not thinking about implications - Microsoft Copilots, Github Copilot, every major security vendor releasing a security Copilot
1.2. Common concerns that are being ignored - prompt injection, circumventing data classification, inherent uncertainty of what applications will choose to do in production
1.3. GenAI is a No Code movement - drag a few boxes and have your GenAI application ready to use
1.4. Distinction between Copilots and Agents - how Copilots aim to address concerns by tying AI actions to user interaction and therefore intentions
2. Intro to Microsoft Copilot Studio
2.1. Explain focus on Microsoft - tied to OpenAI and already built into every enterprise and their enterprise data
2.2. Brief intro to Copilot Studio - the platform that runs Microsoft Copilots and their extendability, and a platform to build your own Copilots on top of enterprise data
2.3. Capabilities - how GenAI gets plugged into enterprise data, with over 1500 data connectors and user impersonation by design
3. Breaking Copilots
3.1. A methodological breakdown of how Copilot Studio works and a threat analysis for Copilots built with that technology
3.2. User access to Copilot - showing how default configuration leads to publicly accessible bots and sharing with your entire organization. These defaults include: bot is Internet facing with no auth (yes, really), bot is shared with the entire organization (why not?), bot can require authentication but not enforce it (indeed), bot shares maker identity with bot users (identity is best when shared).
3.3. Copilot access to data - Microsoft claims that Copilots are secure because they inherit user permissions and controls. In practice this means user impersonation by by design, no way to distinguish between Copilot and user activity, and embedded credentials (including OAuth refresh tokens) leveraged implicitly without user knowledge
3.4. User-isolation breakdown - how one user can get Copilot to act on behalf of another
3.5. Data classification becomes obsolete - how Copilot can read a document classified sensitive and spew it out without the classification label. So long DLP and thanks for all the fish.
4. Exploitation
4.1. Copilot predictable misconfiguration lead to easy enumeration of publicly accessible copilots
4.2. Once you identify a publically accessible Copilot you need to extract data from it. Show how you can fight fire with fire, using GenAI to fuzz the Copilot into spewing sensitive enterprise data
4.3. Dropping CopilotHunter, a red teaming tool that automates all of the above
5. Where do we go now?
5.1. Bad default configuration in Microsoft Copilot Studio and how to avoid it - clear actionable changes to do today
5.2. Generalizing - how do we build secure and reliable Copilots? Separation of control plane and data plane, not putting too much power in the hands of AI, isolation of user-context
Inbar has been teaching and lecturing about Internet Security and Reverse Engineering for nearly as long as he has been doing that himself. He started programming at the age of 9 and Reverse Engineering at the age of 14. He spent most of his career in the Internet and Data Security field, and the only reason he's not in jail right now is because he chose the right side of the law at an early age.
Inbar specializes in an outside-the-box approach to analyzing security and finding vulnerabilities, using his extensive experience of close to 30 years. Nowadays, Inbar is the VP of Research at Zenity, the leading platform for securing and monitoring Low-Code/No-Code development.