
A new phishing technique dubbed CoPhish abuses Microsoft Copilot Studio to trick users into granting attackers unauthorized access to their Microsoft Entra ID accounts.

Identified by Datadog Security Labs, the technique uses customizable AI agents hosted on legitimate Microsoft domains to wrap conventional OAuth consent flows, lending them a veneer of trustworthiness and lowering user suspicion.

The attack, detailed in a recent analysis, underscores persistent weaknesses in cloud-based AI applications despite Microsoft's efforts to harden consent policies.

By exploiting Copilot Studio's flexibility, attackers can build seemingly harmless chatbots that prompt users to sign in, ultimately stealing OAuth tokens for malicious actions such as reading emails or accessing calendar entries.

The disclosure comes amid rapid growth in AI services, where user-configurable features intended for productivity can inadvertently enable phishing. As organizations adopt tools like Copilot, such exploits underscore the need for careful scrutiny of low-code platforms.

OAuth consent attacks, tracked as MITRE ATT&CK technique T1528 (Steal Application Access Token), lure users into approving malicious app registrations that request broad permissions to sensitive data.


In Entra ID environments, attackers create app registrations requesting access to Microsoft Graph resources such as email or OneNote, then lure victims into consenting via phishing links. Once consent is granted, the resulting token lets the attacker act on the victim's behalf, enabling data theft or further exploitation.
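For illustration, the consent prompt at the heart of such an attack is just a standard Entra ID authorization URL. The Python sketch below builds one; the client ID is a placeholder, and the Bot Framework redirect URI and scope list are assumptions chosen for illustration, not values taken from Datadog's report.

```python
# Minimal sketch of an OAuth consent URL of the kind a phishing lure
# resolves to. The client_id is a placeholder; the redirect_uri is the
# documented Azure Bot Service OAuth redirect (an assumption here, not
# a value confirmed by the CoPhish writeup).
from urllib.parse import urlencode

TENANT = "common"  # multi-tenant authorization endpoint

params = {
    "client_id": "00000000-0000-0000-0000-000000000000",  # attacker's app registration (placeholder)
    "response_type": "code",
    "redirect_uri": "https://token.botframework.com/.auth/web/redirect",
    # Short scope names default to Microsoft Graph; these remain
    # user-consentable under the recommended default policy.
    "scope": "openid offline_access Mail.ReadWrite Calendars.ReadWrite",
}

consent_url = f"https://login.microsoftonline.com/{TENANT}/oauth2/v2.0/authorize?{urlencode(params)}"
print(consent_url)
```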

Microsoft has tightened defenses over the years, including restrictions on unverified applications in 2020 and a July 2025 update that makes "microsoft-user-default-recommended" the default policy, blocking user consent for high-risk permissions such as Sites.Read.All and Files.Read.All without admin approval.
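Administrators can confirm which grant policy actually applies to ordinary users by reading the authorizationPolicy resource in Microsoft Graph. A minimal sketch, assuming an access token with the Policy.Read.All permission:

```python
# Check which permission-grant policy is assigned to the default user
# role. Replace the bearer token with one holding Policy.Read.All.
import requests

GRAPH = "https://graph.microsoft.com/v1.0"
headers = {"Authorization": "Bearer <token-with-Policy.Read.All>"}

policy = requests.get(f"{GRAPH}/policies/authorizationPolicy", headers=headers).json()
grant_policies = policy["defaultUserRolePermissions"]["permissionGrantPolicyIdsAssignedToDefaultUserRole"]

# e.g. ["ManagePermissionGrantsForSelf.microsoft-user-default-recommended"]
# means the recommended default is in effect; an empty list means users
# cannot consent to applications at all.
print(grant_policies)
```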

Nonetheless, gaps persist: non-privileged users can still approve internal applications for permissions like Mail.ReadWrite or Calendars.ReadWrite, and administrators with roles such as Application Administrator can consent to any permission on any app.
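Since accounts holding such roles are the highest-value consent-phishing targets, it helps to know exactly who has them. A minimal sketch that lists Application Administrator holders via Microsoft Graph, assuming a token with RoleManagement.Read.Directory (the GUID is Entra ID's well-known role template ID for that role):

```python
# Enumerate members of the Application Administrator directory role,
# since those accounts can consent to any permission on any app.
import requests

GRAPH = "https://graph.microsoft.com/v1.0"
headers = {"Authorization": "Bearer <token-with-RoleManagement.Read.Directory>"}

# Well-known Entra ID role template ID for Application Administrator.
APP_ADMIN_TEMPLATE = "9b895d92-2cd3-44c7-9d02-a6ac2d5ea5c3"

# directoryRoles only returns roles that have been activated in the tenant.
roles = requests.get(f"{GRAPH}/directoryRoles", headers=headers).json()["value"]
for role in roles:
    if role.get("roleTemplateId") == APP_ADMIN_TEMPLATE:
        members = requests.get(
            f"{GRAPH}/directoryRoles/{role['id']}/members", headers=headers
        ).json()["value"]
        for member in members:
            print(member.get("displayName"), member.get("userPrincipalName"))
```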

A further change slated for late October 2025 will restrict these permissions further but will not fully protect privileged users.

CoPhish Attack Abuses Copilot Studio

In the CoPhish approach, attackers build a malicious Copilot Studio agent, a customizable chatbot created with a trial license in their own tenant or in a compromised one, Datadog noted.

The agent's "Login" topic, a built-in authentication workflow, is tampered with to add an HTTP request that exfiltrates the user's OAuth token to an attacker-controlled server after consent.

The demo website feature publishes the agent via a URL on copilotstudio.microsoft.com, mimicking official Copilot services and defeating basic domain checks.

[Figure: malicious Copilot Studio demo page]

The attack unfolds when a target clicks a distributed link, sees a familiar interface with a "Login" button, and is redirected to the fraudulent OAuth flow.

For internal targets, the application requests user-consentable scopes such as Notes.ReadWrite; for administrators, it can request anything, including otherwise-restricted permissions. After approval, a validation code from token.botframework.com completes the flow, but the token is silently relayed onward, often via Microsoft's own IP addresses, hiding the exfiltration from the victim's traffic logs.

Attackers can then use the token to send phishing emails or exfiltrate data, all without the victim noticing. A diagram in the report illustrates the flow, showing the agent forwarding tokens for exfiltration after consent.

[Figure: CoPhish attack chain]

To mitigate CoPhish, researchers recommend enforcing custom consent policies stricter than Microsoft's defaults, blocking users from creating app registrations, and monitoring Entra ID audit logs for suspicious consents or Copilot agent modifications.
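As a concrete starting point for that log review, consent grants appear in the Entra ID directory audit log under the activity name "Consent to application". A minimal sketch that pulls recent events via Microsoft Graph, assuming a token with AuditLog.Read.All:

```python
# List recent application-consent events from the Entra ID audit log.
import requests

GRAPH = "https://graph.microsoft.com/v1.0"
headers = {"Authorization": "Bearer <token-with-AuditLog.Read.All>"}
params = {"$filter": "activityDisplayName eq 'Consent to application'"}

events = requests.get(
    f"{GRAPH}/auditLogs/directoryAudits", headers=headers, params=params
).json()["value"]

for event in events:
    initiator = (event.get("initiatedBy") or {}).get("user") or {}
    print(event["activityDateTime"], initiator.get("userPrincipalName"), event["result"])
```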

This attack is a warning for emerging AI platforms: their ease of customization raises the stakes when they are integrated with identity systems. As cloud services expand, organizations must prioritize robust consent policies to defend against such hybrid threats.
