AI & Development Tools

Cursor Composer 2 Is Fast, Cheap, and in a Licensing Dispute With a Chinese AI Company

Sophylabs Engineering
8 min read

Cursor just released the hottest AI coding tool of the year. Composer 2 beats Claude Opus 4.6 on every major benchmark, costs a tenth of the price, and ships with an interface developers genuinely love. There's just one problem: it's built on a Chinese open-source model, the provenance was never disclosed, and the original creators say Cursor is violating the license.

This is not a hypothetical concern. It's an active dispute with real implications for the millions of developers and tens of thousands of businesses that rely on Cursor every day. Here's what happened, what it means, and what you should do about it.

What Actually Happened

On March 19, Cursor launched Composer 2 with impressive numbers. A Terminal-Bench score of 61.7 compared to Claude Opus 4.6's 58.0. Pricing at $0.50 per million tokens versus Claude's $5.00. Faster inference, lower latency, and a slick integration into Cursor's existing editor.

What Cursor didn't disclose: Composer 2 is built on Kimi K2.5, a model developed by Moonshot AI, a Beijing-based AI company. Cursor never mentioned this publicly. Developers identified the underlying model through fingerprinting within days, and the news spread fast.

Moonshot AI has since accused Cursor of non-compliance with the Kimi K2.5 license terms. The specifics of the dispute remain partially private, but the core issue is clear: Cursor built a commercial product on top of an open-source model and did not meet the attribution and licensing requirements.

Why the Non-Disclosure Matters

Cursor is not a small indie tool. It has over 1 million daily active users, more than 50,000 businesses on paid plans, and a valuation of $29.3 billion. When a company that size builds on someone else's model without disclosure, it raises two distinct issues.

  • Disclosure failure. Developers and businesses chose Cursor based on assumptions about the underlying technology. Many assumed Composer 2 was a proprietary model. The lack of transparency undermines trust, regardless of whether the model is technically good.
  • License dispute. Open-source licenses have specific requirements for attribution, modification disclosure, and commercial use. If Cursor violated those terms, the legal exposure could force them to pull Composer 2 entirely, switch models, or negotiate a retroactive license.

The Pricing Disruption Is Real

Whatever happens with the licensing dispute, the pricing signal is impossible to ignore. Composer 2 offers near-equivalent performance at 10x lower cost. Let's look at real numbers.

  • A typical Claude Code session generating a full-stack feature (API endpoint, database migration, frontend component, tests) costs roughly $2.50 to $4.00 in API tokens. The same task in Composer 2 costs $0.25 to $0.40.
  • For a 10-person engineering team using AI coding tools heavily, the API cost difference is significant: roughly $3,000 to $5,000 per month with Claude versus $300 to $500 with Composer 2.
  • At this price point, automation decisions that were previously marginal become obvious. Tasks that weren't worth the API cost at $5 per million tokens are easily justified at $0.50. The 10x cost reduction doesn't just save money. It changes what teams decide to automate.
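The comparison above is simple per-token arithmetic. Here's a quick back-of-the-envelope calculation using the prices quoted in this article; the token volumes (a ~600K-token feature session, ~80M tokens per developer per month) are illustrative assumptions, not measured figures:

```python
# Back-of-the-envelope API cost comparison.
# Prices are the per-million-token figures quoted above;
# token volumes are illustrative assumptions.

CLAUDE_PRICE = 5.00 / 1_000_000    # $ per token
COMPOSER_PRICE = 0.50 / 1_000_000  # $ per token

def session_cost(tokens: int, price_per_token: float) -> float:
    """Cost in dollars for a given token volume at a given rate."""
    return tokens * price_per_token

# A feature-sized session: assume ~600K tokens of input + output.
feature_tokens = 600_000
print(f"Feature with Claude:     ${session_cost(feature_tokens, CLAUDE_PRICE):.2f}")
print(f"Feature with Composer 2: ${session_cost(feature_tokens, COMPOSER_PRICE):.2f}")

# A 10-person team, each consuming ~80M tokens per month.
team_tokens = 10 * 80_000_000
print(f"Team month with Claude:     ${session_cost(team_tokens, CLAUDE_PRICE):,.0f}")
print(f"Team month with Composer 2: ${session_cost(team_tokens, COMPOSER_PRICE):,.0f}")
```

Under those assumptions the numbers land inside the ranges above ($3.00 vs $0.30 per feature, $4,000 vs $400 per team-month), and any token volume you plug in preserves the same 10x ratio.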

How Chinese Open-Source Models Got Here

Kimi K2.5 isn't an outlier. It's part of a wave of Chinese open-source models that have reached, and in some cases surpassed, Western commercial models.

  • Kimi K2.5 (Moonshot AI) is purpose-built for code generation and editing tasks. It excels at understanding project context and generating multi-file changes.
  • DeepSeek R2 pushed reasoning capabilities to new heights while remaining fully open-source with permissive licensing.
  • Qwen 3 (Alibaba) and Baichuan 3 have both demonstrated competitive performance on standard benchmarks at a fraction of the training cost of Western equivalents.

US export controls have focused on restricting chip sales, not model weights. The result is that Chinese labs trained competitive models on available hardware and then released them openly. The models are here, they're good, and they're not going away.

What This Means If You're a Dev Team Evaluating AI Tools

The right response is informed evaluation, not panic. Here are the practical considerations.

  • Your data. What code is being sent to the model? If you're using Composer 2, your code is being processed by infrastructure that may route through servers subject to different data privacy regulations. Understand where your data goes.
  • Your clients' data. If you're an agency or consultancy, the code you write often contains client business logic, API keys in config files, and proprietary algorithms. Your AI tooling choices affect your clients, not just you.
  • Your AI policy. If your company doesn't have a written policy on which AI tools are approved for use with production code, this is the moment to create one. The landscape is moving fast, and individual developers will make their own choices if the organization doesn't provide guidance.
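One concrete mitigation for the "API keys in config files" problem is a pre-flight scan before any file is handed to an AI tool. The sketch below is a simplistic illustration, not a complete secret-detection solution; the patterns and the sample config are invented examples:

```python
# Illustrative pre-flight check: scan text for obvious secrets before it is
# sent to any AI coding tool. Patterns are simplistic examples only.
import re

SECRET_PATTERNS = [
    # quoted values assigned to names like api_key, secret, token, password
    re.compile(r"(?i)(api[_-]?key|secret|token|password)\s*[:=]\s*['\"][^'\"]+['\"]"),
    # strings shaped like an AWS access key ID
    re.compile(r"AKIA[0-9A-Z]{16}"),
]

def contains_secrets(text: str) -> bool:
    """Return True if any known secret pattern appears in the text."""
    return any(pattern.search(text) for pattern in SECRET_PATTERNS)

sample_config = 'API_KEY = "sk-live-1234567890abcdef"\nDEBUG = True'
print(contains_secrets(sample_config))  # True: redact or block before sending
```

Real teams would reach for a dedicated scanner rather than hand-rolled regexes, but even a check this crude catches the most common mistake: pasting a whole config file into a prompt.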

Where We Stand at Sophylabs

We use Claude Code on production client projects. We chose it for three reasons: clear data handling policies, known model provenance, and a licensing structure we can verify. Those factors matter more to us than benchmark scores or token pricing.

We will evaluate Composer 2 if and when the licensing dispute resolves cleanly. If Cursor reaches an agreement with Moonshot AI, discloses the model provenance clearly, and provides transparent data handling documentation, we'll run it through our evaluation process like any other tool.

What we won't do is run client code through a tool with unresolved legal questions about its underlying model. That's not a political position. It's a professional one.

The Questions Worth Asking

Whether you're evaluating Composer 2 or any other AI coding tool, these are the questions that matter now.

  • What model powers this tool? If the vendor won't tell you, that's a red flag. Model provenance should be public information for any tool that processes your code.
  • Where was the model trained, and on what data? Training data composition affects output quality, bias, and potential intellectual property concerns. You have a right to know.
  • What license governs the model? Open-source models come with specific license terms. If the tool vendor is in compliance, they should be able to demonstrate it clearly.
  • Does your organization have an AI policy? If not, start with a simple one: which tools are approved, what data can be sent to them, and who makes those decisions. You can refine it later.
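A simple policy like the one in the last bullet can even be expressed as data that tooling can check. The structure below is an illustrative sketch, not a standard format; the tool names and data classes are hypothetical examples:

```python
# Illustrative sketch of a minimal, machine-checkable AI tool policy.
# Tool names and data classifications are hypothetical examples.

AI_POLICY = {
    "approved_tools": {
        "claude-code": {"allowed_data": {"internal", "client"}},
        "github-copilot": {"allowed_data": {"internal"}},
    },
    "decision_owner": "engineering-leadership",
}

def is_allowed(tool: str, data_class: str) -> bool:
    """True if the policy approves sending this class of data to this tool."""
    entry = AI_POLICY["approved_tools"].get(tool)
    return entry is not None and data_class in entry["allowed_data"]

print(is_allowed("claude-code", "client"))     # True: approved for client code
print(is_allowed("github-copilot", "client"))  # False: internal-only tool
print(is_allowed("composer-2", "internal"))    # False: not on the approved list
```

The point isn't the code; it's that a policy concrete enough to encode is concrete enough to enforce. Start with the allowlist and the decision owner, and refine from there.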

What Happens Next

There are three plausible scenarios for how this plays out.

  • Agreement. Cursor and Moonshot AI reach a licensing deal. Cursor pays for commercial use, adds proper attribution, and Composer 2 continues as-is. This is the most likely outcome and would be the best for users.
  • Model switch. Cursor replaces Kimi K2.5 with a different model (their own, or a properly licensed alternative). This would cause a temporary performance disruption but resolve the legal question permanently.
  • Legal escalation. Moonshot AI pursues formal legal action. This would create uncertainty for months and could result in Composer 2 being pulled from the market during the proceedings. Unlikely, but possible.

Regardless of the outcome, this situation is a preview of what's coming for the entire AI tooling market. As open-source models from multiple countries become competitive with proprietary ones, questions about licensing, data handling, and transparency will only get more important. The teams that establish clear policies now will be better positioned than those that react later.

Need Clarity on AI Tooling for Your Project?

We make deliberate decisions about which AI tools touch client code. If you're building software and want a team that's transparent about tooling choices, let's talk.

Free 30-minute call | No commitment