Anthropic accused three Chinese AI companies—DeepSeek, Moonshot AI, and MiniMax—of conducting “industrial-scale campaigns” to steal data from its Claude AI models, the company disclosed on February 23, 2026. The alleged scheme involved 24,000 fraudulent accounts making over 16 million interactions to extract training data through a technique called model distillation; Anthropic said it traced the activity to the firms through IP addresses and infrastructure analysis.
The technique at the center of the alleged scheme, model distillation, involves systematically querying a powerful AI system to generate massive datasets of prompts and responses. These datasets can then train smaller models that replicate the original’s capabilities without accessing its proprietary architecture or training data. Anthropic stated the actions violated its terms of service and regional access restrictions, according to the company’s blog post.
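The data flow described above can be sketched in a few lines. This is an illustrative toy, not any company's actual tooling: the "teacher" stub stands in for API calls to a large proprietary model, and the "student" trivially memorizes the harvested pairs to show how capability transfers without access to the teacher's weights.

```python
# Toy sketch of model distillation. All names are illustrative; a real
# distillation pipeline would call a model API and fine-tune a neural
# network on the harvested prompt/response pairs.

def teacher_model(prompt: str) -> str:
    # Stand-in for a query to a large proprietary model.
    return f"Detailed answer to: {prompt}"

def harvest_dataset(prompts):
    """Systematically query the teacher to build a training corpus."""
    return [{"prompt": p, "response": teacher_model(p)} for p in prompts]

def train_student(dataset):
    """Placeholder for fine-tuning a smaller model on harvested pairs.
    Here the student just memorizes them, to show the data flow."""
    return {ex["prompt"]: ex["response"] for ex in dataset}

prompts = [f"question {i}" for i in range(5)]
dataset = harvest_dataset(prompts)
student = train_student(dataset)
# The student now reproduces the teacher's outputs on harvested prompts
# with no access to the teacher's architecture or training data.
```

At the scale alleged in the article, the harvesting step would span millions of queries across thousands of accounts, which is what makes detection a volume and pattern problem.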
The disclosure follows similar warnings from OpenAI, which had previously reported Chinese actors using its services to train competing models. Neither Anthropic nor OpenAI specified which versions of their models were targeted in these campaigns.
Evidence Points to Coordinated Operation
Anthropic said it attributed the campaigns to the three companies with “high confidence” through multiple forms of evidence. The company traced activity back to the firms using IP addresses and analyzed what it called a “hydra cluster” of distributed technical infrastructure. Industry partners corroborated similar patterns of behavior, Anthropic noted in its announcement.
Despite the serious nature of these allegations, DeepSeek, MiniMax, and Moonshot AI have not publicly responded, and they did not provide immediate comment when contacted by media outlets, according to The New York Times. No legal action or regulatory investigation has been publicly announced.
Industry-Wide Security Concerns
The incident highlights critical vulnerabilities in how AI companies protect their models from exploitation. Model distillation at this scale allows competitors to bypass the enormous research and computational costs required to build advanced AI systems, creating what industry experts see as unfair competitive advantages.
In response, Anthropic announced it would upgrade its systems to make such attacks “harder to execute and easier to detect.” Potential safeguards being explored across the industry include advanced anomaly detection for unusual API usage patterns, stricter rate limiting to prevent large-scale data harvesting, and research into watermarking techniques that could trace the origin of AI-generated outputs.
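One of the safeguards named above, rate limiting against large-scale harvesting, can be sketched as a per-account sliding-window limiter. The class name, thresholds, and blocking logic here are illustrative assumptions, not Anthropic's actual implementation.

```python
# Minimal sliding-window rate limiter: a per-account request log is
# pruned to the window, and requests beyond the cap are rejected.
# Thresholds are illustrative only.
import time
from collections import defaultdict, deque

class RateLimiter:
    def __init__(self, max_requests: int, window_seconds: float):
        self.max_requests = max_requests
        self.window = window_seconds
        self.history = defaultdict(deque)  # account_id -> timestamps

    def allow(self, account_id, now=None) -> bool:
        now = time.monotonic() if now is None else now
        q = self.history[account_id]
        # Drop timestamps that have aged out of the window.
        while q and now - q[0] > self.window:
            q.popleft()
        if len(q) >= self.max_requests:
            return False  # over the cap: possible automated harvesting
        q.append(now)
        return True

limiter = RateLimiter(max_requests=3, window_seconds=60.0)
results = [limiter.allow("acct-1", now=float(t)) for t in range(5)]
# results → [True, True, True, False, False]: the fourth and fifth
# requests inside the 60-second window are blocked.
```

Anomaly detection would layer on top of this, flagging accounts whose aggregate volume or query patterns deviate from legitimate use rather than blocking individual requests.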
The allegations underscore mounting tensions between Western AI labs and Chinese competitors as the race for artificial intelligence supremacy intensifies. How companies protect their proprietary models while maintaining accessibility for legitimate users remains a critical challenge facing the rapidly evolving AI industry.
Sources
- The New York Times
- Anthropic